What is BI Testing and The Importance of BI Report Testing - Torana, Inc.
Reports drive business decisions, and inaccurate reports can undermine an organization's credibility. BI testing is the key to improving the quality of BI reports and helps prevent wrong strategic decisions. iCEDQ helps businesses by automating BI testing and ensuring the quality of reports. A business intelligence platform helps empower your business. To learn more about BI report testing, visit https://icedq.com/bi-testing/what-is-bi-testing-and-the-importance-of-bi-report-testing.
Software Engineering Patterns for Machine Learning Applications - Hironori Washizaki
Hironori Washizaki, Software Engineering Patterns for Machine Learning Applications, 2021 IEEE International Conference on Electronic Technology, Communication and Information (ICETCI 2021), Keynote, August 28, Online, 2021.
High Availability & Disaster Recovery with SQL Server 2012 AlwaysOn Availabil... - turgaysahtiyan
The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise. In this session we will talk about what's coming with AlwaysOn and how it helps improve high-availability and disaster-recovery solutions.
Getting the service description (WSDL)
Configure Service Bus
Import Resources
Configure Business Service
Configure the Credit Card Validation Proxy
Configure Message Flow (Validate & Report)
Adding a Pipeline Pair -> Add Stage -> Add Action (Reporting) -> Add Validate Action
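The steps above wire a validate-and-report message flow. As a rough conceptual sketch of what a pipeline pair with a reporting action and a validate action does (in plain Python with invented names, not actual Oracle Service Bus artifacts):

```python
# Conceptual model of a pipeline pair: a request passes through ordered
# stages, each stage running its actions in sequence (report, then validate).
# All names here are illustrative, not real Service Bus APIs.

def report_action(message, log):
    # Record the message for reporting before validation runs.
    log.append(("report", dict(message)))
    return message

def validate_action(message, log):
    # Reject requests whose card number fails a simple length check.
    number = message.get("cardNumber", "")
    if not (number.isdigit() and len(number) == 16):
        raise ValueError("invalid card number")
    log.append(("validate", "ok"))
    return message

def run_pipeline(message, stages, log):
    # Each stage is a list of actions applied in order.
    for actions in stages:
        for action in actions:
            message = action(message, log)
    return message

log = []
stages = [[report_action, validate_action]]  # one stage: report, then validate
result = run_pipeline({"cardNumber": "4111111111111111"}, stages, log)
print([name for name, _ in log])  # actions that ran, in order
```

The point of the stage/action split is that reporting happens regardless of whether validation later rejects the message.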
This presentation shows new features in SQL 2019, and a recap of features from SQL 2000 through 2017 as well. You would be wise to hear someone from Microsoft deliver this material.
In Data Engineer's Lunch #54, we will discuss dbt (the data build tool), which manages data transformations with config files rather than code. We will connect it to Apache Spark and use it to perform transformations.
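The core idea behind dbt, declaring transformations as named models with dependencies and letting the tool run them in dependency order, can be sketched in plain Python. The model names and config shape below are invented for illustration; real dbt uses YAML project files and SQL models:

```python
# Minimal illustration of config-driven transformations in dbt's spirit:
# each "model" declares its upstream dependencies, and the runner builds
# every dependency before the model that needs it.

raw_orders = [
    {"id": 1, "amount": 120, "status": "complete"},
    {"id": 2, "amount": 80, "status": "cancelled"},
    {"id": 3, "amount": 200, "status": "complete"},
]

models = {
    "stg_orders": {
        "depends_on": [],
        "build": lambda ctx: [o for o in raw_orders if o["status"] == "complete"],
    },
    "fct_revenue": {
        "depends_on": ["stg_orders"],
        "build": lambda ctx: {"revenue": sum(o["amount"] for o in ctx["stg_orders"])},
    },
}

def run(models):
    # Resolve and execute models so every dependency is built first.
    built, order = {}, []
    def build(name):
        if name in built:
            return
        for dep in models[name]["depends_on"]:
            build(dep)
        built[name] = models[name]["build"](built)
        order.append(name)
    for name in models:
        build(name)
    return built, order

results, order = run(models)
print(order, results["fct_revenue"])
```

Swapping the lambdas for SQL strings executed by Spark gives roughly what dbt-spark does at a much larger scale.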
Accompanying YouTube: https://youtu.be/dwZlYG6RCSY
Sign Up For Our Newsletter: http://eepurl.com/grdMkn
Join Data Engineer’s Lunch Weekly at 12 PM EST Every Monday:
https://www.meetup.com/Data-Wranglers-DC/events/
Cassandra.Link:
https://cassandra.link/
Follow Us and Reach Us At:
Anant:
https://www.anant.us/
Awesome Cassandra:
https://github.com/Anant/awesome-cassandra
Email:
solutions@anant.us
LinkedIn:
https://www.linkedin.com/company/anant/
Twitter:
https://twitter.com/anantcorp
Eventbrite:
https://www.eventbrite.com/o/anant-1072927283
Facebook:
https://www.facebook.com/AnantCorp/
Join The Anant Team:
https://www.careers.anant.us
Embarking on building a modern data warehouse in the cloud can be an overwhelming experience due to the sheer number of products that can be used, especially when the use cases for many products overlap with one another. In this talk I will cover the use cases of many of the Microsoft products that you can use when building a modern data warehouse, broken down into four areas: ingest, store, prep, and model & serve. It's a complicated story that I will try to simplify, giving blunt opinions on when to use which products and the pros and cons of each.
Free and useful tools have proliferated since the launch of the CodePlex and SourceForge websites. Join Kevin Kline, long-time author of the SQL Server Magazine column "Tool Time", as he profiles the very best of the free tools covered in his monthly column - dozens of free tools and utilities! Some of the covered tools help to:
- Track database growth
- Implement logging in SSIS job steps
- Stress test your database applications
- Automate important preventative maintenance tasks
- Automate maintenance tasks for Analysis Services
- Help protect against SQL Injection attacks
- Graphically manage Extended Events
- Utilize PowerShell scripts to ease administration
And much more. These tools are all free and independently supported by SQL Server enthusiasts around the world.
Microsoft SQL Server 2017 Level 300 technical deck - George Walters
This deck covers new features in SQL Server 2017, as well as carryover features from 2012 onwards, including high availability, columnstore, AlwaysOn, in-memory tables, and other enterprise features.
Microsoft Data Platform - What's included - James Serra
The pace of Microsoft product innovation is so fast that even though I spend half my days learning, I struggle to keep up. And as I work with customers I find they are often in the dark about many of the products that we have since they are focused on just keeping what they have running and putting out fires. So, let me cover what products you might have missed in the Microsoft data platform world. Be prepared to discover all the various Microsoft technologies and products for collecting data, transforming it, storing it, and visualizing it. My goal is to help you not only understand each product but understand how they all fit together and their proper use cases, allowing you to build the appropriate solution that can incorporate any data in the future no matter the size, frequency, or type. Along the way we will touch on technologies covering NoSQL, Hadoop, and open source.
BRK3288 SQL Server v.Next with support on Linux, Windows and containers was... - Bob Ward
SQL Server is bringing its world-class RDBMS to Linux and Windows with SQL Server v.Next. In this session you will learn what's next for SQL Server on Linux and how application developers and IT architects can now leverage the enterprise-class features of SQL Server in every edition on Linux, Windows and containers.
Introduction to SQL Server Analysis Services 2008 - Tobias Koprowski
This is my presentation from the 17th Polish SQL Server User Group meeting in Wroclaw. It's the first part of the Quadrology Business Intelligence for IT Pros cycle.
PASS Chapter Meeting Dec 2013 - Compression: a hidden gem for IO heavy databas... - Charley Hanania
Compression: a hidden Gem for IO heavy Databases
The limiting factor in most database systems is the ability to read and write data to the IO subsystem.
We're still using storage layouts and methodologies in SQL Server that are a reflection of old spinning media in times gone by.
Until major changes are made to the internal storage layouts, we have "some" hope with options such as data compression, sparse columns and filtered indexes, which not only save space on disk, but also reflect a saving in memory.
In this session we will go over the IO savings technologies presented in SQL Server, and discuss how implementing some of these will assist in your operational performance goals.
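The trade at the heart of this session, spending some CPU on compression to move fewer bytes through the IO subsystem and memory, can be seen with a quick generic experiment. This sketch uses Python's zlib on repetitive row-like data; it illustrates the principle only, not SQL Server's actual ROW/PAGE compression formats:

```python
import zlib

# Repetitive tabular data compresses well, so fewer bytes cross the IO
# subsystem (and more rows fit in memory) at the cost of CPU to pack/unpack.
rows = "".join(f"{i % 100:04d},ACTIVE,2013-12-01\n" for i in range(10_000))
raw = rows.encode("ascii")
packed = zlib.compress(raw, level=6)

ratio = len(packed) / len(raw)
print(f"raw={len(raw)} bytes, compressed={len(packed)} bytes, ratio={ratio:.2%}")
assert zlib.decompress(packed) == raw  # lossless round trip
```

The more redundancy in the stored values (repeated statuses, dates, low-cardinality codes), the larger the saving, which is exactly the profile of many IO-heavy OLTP and warehouse tables.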
Presenter: Charley Hanania, MVP
Charley is Principal Consultant at QS2 AG in Switzerland and has consulted to organisations of all sizes during his extensive career in Database and Platform Consulting.
He's been focused on SQL Server since v4.2 on OS/2, and with over 15 years of experience in IT he's supported companies in the areas of DB training, development, architecture & administration throughout Europe, America & Australasia.
Communities are Charley's passion, and he became active in database communities in the mid-1990s, participating in heterogeneous database user groups in Australia. He continues to play an active role through community events such as Database Days, the European PASS Conference, PASS & the Swiss PASS Chapter.
Want to see a high-level overview of the products in the Microsoft data platform portfolio in Azure? I’ll cover products in the categories of OLTP, OLAP, data warehouse, storage, data transport, data prep, data lake, IaaS, PaaS, SMP/MPP, NoSQL, Hadoop, open source, reporting, machine learning, and AI. It’s a lot to digest but I’ll categorize the products and discuss their use cases to help you narrow down the best products for the solution you want to build.
SQL Server 2016 Overview and Editions
SQL Server Improvement
SQL Server and Windows
In Memory OLTP
Upgrade to SQL Server 2016
Upgrade Life Cycle
Planning
Upgrade Advisor
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
PASS Summit - SQL Server 2017 Deep Dive - Travis Wright
Deep dive into SQL Server 2017 covering SQL Server on Linux, containers, HA improvements, SQL graph, machine learning, python, adaptive query processing, and much much more.
Differentiate Big Data vs Data Warehouse use cases for a cloud solution - James Serra
It can be quite challenging keeping up with the frequent updates to the Microsoft products and understanding all their use cases and how all the products fit together. In this session we will differentiate the use cases for each of the Microsoft services, explaining and demonstrating what is good and what isn't, in order for you to position, design and deliver the proper adoption use cases for each with your customers. We will cover a wide range of products such as Databricks, SQL Data Warehouse, HDInsight, Azure Data Lake Analytics, Azure Data Lake Store, Blob storage, and AAS as well as high-level concepts such as when to use a data lake. We will also review the most common reference architectures (“patterns”) witnessed in customer adoption.
In this session we review the new improvements and features that will be implemented in the next version of SQL Server, mainly in the areas of Security, Performance and High Availability.
Microsoft Azure is changing. Its database part (Windows Azure SQL Database) is changing even faster. In this session I would like to show those who haven't seen it, and remind those who already know something, what WASD is about, what changes have taken place, and what we can expect from this database. For the brave, there will be an opportunity to connect to a cloud account and test these solutions yourself.
Microsoft Fabric is the next version of Azure Data Factory, Azure Data Explorer, Azure Synapse Analytics, and Power BI. It brings all of these capabilities together into a single unified analytics platform that goes from the data lake to the business user in a SaaS-like environment. Therefore, the vision of Fabric is to be a one-stop shop for all the analytical needs for every enterprise and one platform for everyone from a citizen developer to a data engineer. Fabric will cover the complete spectrum of services including data movement, data lake, data engineering, data integration and data science, observational analytics, and business intelligence. With Fabric, there is no need to stitch together different services from multiple vendors. Instead, the customer enjoys an end-to-end, highly integrated, single offering that is easy to understand, onboard, create and operate.
This is a hugely important new product from Microsoft and I will simplify your understanding of it via a presentation and demo.
Agenda:
What is Microsoft Fabric?
Workspaces and capacities
OneLake
Lakehouse
Data Warehouse
ADF
Power BI / DirectLake
Resources
Data Lakehouse, Data Mesh, and Data Fabric (r2) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a modern data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I’ll also include use cases so you can see what approach will work best for your big data needs. And I'll discuss Microsoft's version of the data mesh.
Data Lakehouse, Data Mesh, and Data Fabric (r1) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Data Warehousing Trends, Best Practices, and Future Outlook - James Serra
Over the last decade, the 3Vs of data - Volume, Velocity & Variety - have grown massively. The Big Data revolution has completely changed the way companies collect, analyze & store data. Advancements in cloud-based data warehousing technologies have empowered companies to fully leverage big data without heavy investments both in terms of time and resources. But that doesn’t mean building and managing a cloud data warehouse isn’t accompanied by any challenges. From deciding on a service provider to the design architecture, deploying a data warehouse tailored to your business needs is a strenuous undertaking. Looking to deploy a data warehouse to scale your company’s data infrastructure, or still on the fence? In this presentation you will gain insights into the current data warehousing trends, best practices, and future outlook. Learn how to build your data warehouse with the help of real-life use cases and discussion of commonly faced challenges. In this session you will learn:
- Choosing the best solution - Data Lake vs. Data Warehouse vs. Data Mart
- Choosing the best Data Warehouse design methodologies: Data Vault vs. Kimball vs. Inmon
- Step by step approach to building an effective data warehouse architecture
- Common reasons for the failure of data warehouse implementations and how to avoid them
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
The data lake has become extremely popular, but there is still confusion on how it should be used. In this presentation I will cover common big data architectures that use the data lake, the characteristics and benefits of a data lake, and how it works in conjunction with a relational data warehouse. Then I’ll go into details on using Azure Data Lake Store Gen2 as your data lake, and various typical use cases of the data lake. As a bonus I’ll talk about how to organize a data lake and discuss the various products that can be used in a modern data warehouse.
Power BI Overview, Deployment and Governance - James Serra
Deploying Power BI in a large enterprise is a complex task, and one that requires a lot of thought and planning. The purpose of this presentation is to help you make your Power BI deployment a success. After a quick Power BI overview, I’ll discuss deployment strategies, common usage scenarios, how to store and refresh data, prototyping options, how to share externally, and then finish with how to administer and secure Power BI. I’ll outline considerations and best practices for achieving an optimal, well-performing, enterprise level Power BI deployment.
Power BI has become a product with a ton of exciting features. This presentation will give an overview of some of them, including Power BI Desktop, Power BI service, what’s new, integration with other services, Power BI premium, and administration.
The breadth and depth of Azure products that fall under the AI and ML umbrella can be difficult to follow. In this presentation I’ll first define exactly what AI, ML, and deep learning are, and then go over the various Microsoft AI and ML products and their use cases.
AI for an intelligent cloud and intelligent edge: Discover, deploy, and manag... - James Serra
Discover, manage, deploy, monitor – rinse and repeat. In this session we show how Azure Machine Learning can be used to create the right AI model for your challenge and then easily customize it using your development tools while relying on Azure ML to optimize them to run in hardware accelerated environments for the cloud and the edge using FPGAs and Neural Network accelerators. We then show you how to deploy the model to highly scalable web services and nimble edge applications that Azure can manage and monitor for you. Finally, we illustrate how you can leverage the model telemetry to retrain and improve your content.
Power BI for Big Data and the New Look of Big Data Solutions - James Serra
New features in Power BI give it enterprise tools, but that does not mean it automatically creates an enterprise solution. In this talk we will cover these new features (composite models, aggregations tables, dataflow) as well as Azure Data Lake Store Gen2, and describe the use cases and products of an individual, departmental, and enterprise big data solution. We will also talk about why a data warehouse and cubes still should be part of an enterprise solution, and how a data lake should be organized.
In three years I went from a complete unknown to a popular blogger, speaker at PASS Summit, a SQL Server MVP, and then joined Microsoft. Along the way I saw my yearly income triple. Is it because I know some secret? Is it because I am a genius? No! It is just about laying out your career path, setting goals, and doing the work.
I'll cover tips I learned over my career on everything from interviewing to building your personal brand. I'll discuss perm positions, consulting, contracting, working for Microsoft or partners, hot fields, in-demand skills, social media, networking, presenting, blogging, salary negotiating, dealing with recruiters, certifications, speaking at major conferences, resume tips, and keys to a high-paying career.
Your first step to enhancing your career will be to attend this session! Let me be your career coach!
Is the traditional data warehouse dead? - James Serra
With new technologies such as Hive LLAP or Spark SQL, do I still need a data warehouse or can I just put everything in a data lake and report off of that? No! In the presentation I’ll discuss why you still need a relational data warehouse and how to use a data lake and a RDBMS data warehouse to get the best of both worlds. I will go into detail on the characteristics of a data lake and its benefits and why you still need data governance tasks in a data lake. I’ll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution. And I’ll put it all together by showing common big data architectures.
Databricks is a Software-as-a-Service-like experience (or Spark-as-a-service) that is a tool for curating and processing massive amounts of data and developing, training and deploying models on that data, and managing the whole workflow process throughout the project. It is for those who are comfortable with Apache Spark, as it is 100% based on Spark and is extensible with support for Scala, Java, R, and Python alongside Spark SQL, GraphX, Streaming and the Machine Learning Library (MLlib). It has built-in integration with many data sources, has a workflow scheduler, allows for real-time workspace collaboration, and has performance improvements over traditional Apache Spark.
Azure SQL Database Managed Instance is a new flavor of Azure SQL Database that is a game changer. It offers near-complete SQL Server compatibility and network isolation to easily lift and shift databases to Azure (you can literally back up an on-premises database and restore it into an Azure SQL Database Managed Instance). Think of it as an enhancement to Azure SQL Database that is built on the same PaaS infrastructure and maintains all its features (e.g. active geo-replication, high availability, automatic backups, database advisor, threat detection, intelligent insights, vulnerability assessment, etc.) but adds support for databases up to 35TB, VNET, SQL Agent, cross-database querying, replication, etc. So, you can migrate your databases from on-prem to Azure with very little migration effort, which is a big improvement over the current Singleton or Elastic Pool flavors, which can require substantial changes.
Learning to present and becoming good at it - James Serra
Have you been thinking about presenting at a user group? Are you being asked to present at your work? Is learning to present one of the keys to advancing your career? Or do you just think it would be fun to present but you are too nervous to try it? Well, take the first step to becoming a presenter by attending this session and I will guide you through the process of learning to present and becoming good at it. It’s easier than you think! I am an introvert and was deathly afraid to speak in public. Now I love to present and it’s actually my main function in my job at Microsoft. I’ll share with you the journey that led me to speak at major conferences and the skills I learned along the way to become a good presenter and to get rid of the fear. You can do it!
Think of big data as all data, no matter what the volume, velocity, or variety. The simple truth is a traditional on-prem data warehouse will not handle big data. So what is Microsoft’s strategy for building a big data solution? And why is it best to have this solution in the cloud? That is what this presentation will cover. Be prepared to discover all the various Microsoft technologies and products from collecting data, transforming it, storing it, to visualizing it. My goal is to help you not only understand each product but understand how they all fit together, so you can be the hero who builds your company's big data solution.
Choosing technologies for a big data solution in the cloudJames Serra
Has your company been building data warehouses for years using SQL Server? And are you now tasked with creating or moving your data warehouse to the cloud and modernizing it to support “Big Data”? What technologies and tools should you use? That is what this presentation will help you answer. First we will cover what questions to ask concerning data (type, size, frequency), reporting, performance needs, on-prem vs cloud, staff technology skills, OSS requirements, cost, and MDM needs. Then we will show you common big data architecture solutions and help you to answer questions such as: Where do I store the data? Should I use a data lake? Do I still need a cube? What about Hadoop/NoSQL? Do I need the power of MPP? Should I build a "logical data warehouse"? What is this lambda architecture? Can I use Hadoop for my DW? Finally, we’ll show some architectures of real-world customer big data solutions. Come to this session to get started down the path to making the proper technology choices in moving to the cloud.
DocumentDB is a powerful NoSQL solution. It provides elastic scale, high performance, global distribution, a flexible data model, and is fully managed. If you are looking for a scaled OLTP solution that is too much for SQL Server to handle (i.e. millions of transactions per second) and/or will be using JSON documents, DocumentDB is the answer.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We ended with a lovely workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Leading Change strategies and insights for effective change management pdf 1.pdf
What’s new in SQL Server 2017
1. What’s new in SQL Server 2017
James Serra
Big Data Evangelist
Microsoft
JamesSerra3@gmail.com
2. About Me
Microsoft, Big Data Evangelist
In IT for 30 years, worked on many BI and DW projects
Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer
Been perm employee, contractor, consultant, business owner
Presenter at PASS Business Analytics Conference, PASS Summit, Enterprise Data World conference
Certifications: MCSE: Data Platform, Business Intelligence; MS: Architecting Microsoft Azure Solutions, Design and Implement Big Data Analytics Solutions, Design and Implement Cloud Data Platform Solutions
Blog at JamesSerra.com
Former SQL Server MVP
Author of book “Reporting with Microsoft SQL Server 2012”
3. The power of SQL Server
Self-service BI per user at massive scale: Microsoft $120, Tableau $480, Oracle $2,230
[Chart: vulnerability counts by year for SQL Server, Oracle, MySQL, and SAP HANA; NIST comprehensive vulnerability database, June 2016]
1M predictions per second
Any platform, any data, any language: T-SQL, Java, PHP, Node.js, C/C++, C#/VB.NET, Python, Ruby
#1 in 30TB, 10TB, 1TB TPC-H non-clustered results, August 2017 (Windows Server 2016)
4. SQL Server 2017
Meeting you where you are
It’s the same SQL Server database engine with many features and services available for all your applications, regardless of your operational ecosystem.
Any data, any application, anywhere, with a choice of platform (Windows or Linux)
Languages: T-SQL, Java, C/C++, C#/VB.NET, PHP, Node.js, Python, Ruby
5. How we develop SQL
Cloud-first but not cloud-only
Use SQL Database to improve core SQL Server features and cadence
Many interesting and compelling on-premises cloud scenarios
Offerings: SQL Server and APS; Azure SQL Virtual Machines; Azure SQL Database; Azure SQL Data Warehouse
6. A consistent experience from SQL Server on-premises to Microsoft Azure IaaS and PaaS
On-premises, private cloud, and public cloud
SQL Server local (Windows and Linux), VMs (Windows and Linux), containers, and SQL Database
Common development, management, and identity tools including Active Directory, Visual Studio, Hyper-V, and System Center
Scalability, availability, security, identity, backup and restore, and replication
Many data sources
Reporting, integration, processing, and analytics
All supported in the hybrid cloud
Consistency and integration
8. Database Engine new features
Linux/Docker support
• RHEL, Ubuntu, SLES, and Docker containers
Adaptive Query Processing
• Faster queries just by upgrading
• Interleaved execution
• Batch-mode memory grant feedback
• Batch-mode adaptive joins
9. Database Engine new features
Graph
• Store relationships using nodes/edges
• Analyze interconnected data using node/edge query syntax

SELECT r.name
FROM Person AS p, likes AS l1, Person AS p2, likes AS l2, Restaurant AS r
WHERE MATCH(p-(l1)->p2-(l2)->r)
AND p.name = 'Chris'

Automatic Tuning
• Automatic plan correction – identify, and optionally fix, problematic query execution plans causing query performance problems
• Automatic index management – make index recommendations (Azure SQL Database only)
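As a sketch of how automatic plan correction is switched on and inspected (the ALTER DATABASE option and the DMV below follow the SQL Server 2017 documentation; the query simply lists whatever recommendations the engine has collected):

```sql
-- Enable automatic plan correction for the current database
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review the plan-regression recommendations the engine has detected
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```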
10. Database Engine new features
Enhanced performance for natively compiled T-SQL modules
• OPENJSON, FOR JSON, JSON
• CROSS APPLY operations
• Computed columns
New string functions
• TRIM, CONCAT_WS, TRANSLATE, and STRING_AGG with support for WITHIN GROUP (ORDER BY)
BULK IMPORT now supports CSV format and Azure Blob Storage as file source
11. Database Engine new features
Native scoring with T-SQL PREDICT
Resumable online index rebuild
• Pause/resume online index rebuilds
Clusterless read-scale availability groups
• Unlimited, geo-distributed, linear read scaling
[Diagram: one primary (P) feeding read-scale secondaries S1–S4]
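A minimal sketch of native scoring with PREDICT; the model table, model name, and column names here are hypothetical, but the PREDICT ... WITH (...) shape follows the documented T-SQL syntax:

```sql
-- Load a previously trained model stored as varbinary (hypothetical table)
DECLARE @model varbinary(max) =
    (SELECT model FROM dbo.TrainedModels WHERE name = 'churn_model');

-- Score rows in-engine, without calling out to an external runtime
SELECT d.CustomerID, p.ChurnProbability
FROM PREDICT(MODEL = @model, DATA = dbo.Customers AS d)
WITH (ChurnProbability float) AS p;
```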
12. Integration Services new features
Integration Services Scale Out
• Distribute SSIS package execution more easily across multiple workers, and manage executions and workers from a single master computer
Integration Services on Linux
Connectivity improvements
• Connect to the OData feeds of Microsoft Dynamics AX Online and Microsoft Dynamics CRM Online with the updated OData components
13. Analysis Services new features
Object level security for tabular models
Get Data enhancements
• New data sources
• Modern experience for tabular models
Enhanced ragged hierarchy support
• New Hide Members property to hide blank members in ragged hierarchies
14. Reporting Services new features
Comments
• Comments are now available for reports, to add perspective and collaborate with others. You can also include attachments with comments
Broader DAX support
• In Report Builder and SQL Server Data Tools, you can create native DAX queries against supported tabular data models by dragging and dropping desired fields in the query designers
Standalone installer
• SSRS is no longer distributed through SQL Server setup
15. Machine Learning Services new features
Python support
• Python and R scripts are now supported
• revoscalepy – Pythonic equivalent of RevoScaleR – parallel algorithms for data processing with a rich API
MicrosoftML
• Package of machine learning algorithms and transforms (with Python bindings), as well as pre-trained models for image extraction or sentiment analysis
17. Power of the SQL Server Database Engine on the platform of your choice
• Linux distributions: Red Hat Enterprise Linux (RHEL), Ubuntu, and SUSE Linux Enterprise Server (SLES)
• Docker: Windows & Linux containers
• Windows Server / Windows 10
19. Supported platforms
System requirements for SQL Server on Linux:
Platform                                  | Supported version(s) | Supported file system(s)
Red Hat Enterprise Linux                  | 7.3                  | XFS or EXT4
SUSE Linux Enterprise Server              | v12 SP2              | EXT4
Ubuntu                                    | 16.04                | EXT4
Docker Engine (on Windows, Mac, or Linux) | 1.8+                 | N/A
20. Cross-System Architecture
SQL Platform Abstraction Layer (SQLPAL)
• The RDBMS, AS, IS, and RS services run on both Windows and Linux on top of SQLPAL
• A Windows Host Extension and a Linux Host Extension map SQLPAL to OS system calls (IO, memory, CPU scheduling)
• SQL OS v2 exposes the SQL OS API and Win32-like APIs
• System-resource- and latency-sensitive code paths are handled separately from everything else
21. Tools and programmability
• Windows-based SQL Server tools – like SSMS, SSDT, Profiler – work when connected to SQL Server on Linux
• All existing drivers and frameworks supported
• Third-party tools continue to work
• Native command-line tools – sqlcmd, bcp
• Visual Studio Code mssql extension
22. Client Connectivity
SQL Server client drivers are available for many programming languages, including:
Language | Platform              | More Details
C#       | Windows, Linux, macOS | Microsoft ADO.NET for SQL Server
Java     | Windows, Linux, macOS | Microsoft JDBC Driver for SQL Server
PHP      | Windows, Linux, macOS | PHP SQL Driver for SQL Server
Node.js  | Windows, Linux, macOS | Node.js Driver for SQL Server
Python   | Windows, Linux, macOS | Python SQL Driver
Ruby     | Windows, Linux, macOS | Ruby Driver for SQL Server
C++      | Windows, Linux, macOS | Microsoft ODBC Driver for SQL Server
23. What’s available on Linux?
Operations features:
• Support for RHEL, Ubuntu, SLES, Docker
• Package-based installs
• Support for OpenShift, Docker Swarm
• Failover Clustering via Pacemaker
• Backup / Restore
• SSMS on Windows connected to Linux
• Command line tools: sqlcmd, bcp
• Transparent Data Encryption
• Backup Encryption
• SCOM Management Pack
• DMVs
• Table partitioning
• SQL Server Agent
• Full-Text search
• Integration Services
• Active Directory (integrated) authentication
• TLS for encrypted connections
24. What’s available on Linux?
Programming Features
• All major language driver compatibility
• In-Memory OLTP
• Columnstore indexes
• Query Store
• Compression
• Always Encrypted
• Row-Level Security, Data Masking
• Auditing
• Service Broker
• CLR
• JSON, XML
• Third-party tools
27. In-Memory Online Transaction Processing (OLTP)
In-Memory OLTP is the premier technology available in SQL Server and Azure SQL Database for optimizing performance of transaction processing, data ingestion, data load, and transient data scenarios.
Memory-optimized tables outperform traditional disk-based tables, leading to more responsive transactional applications.
Memory-optimized tables also improve throughput and reduce latency for transaction processing, and can help improve performance of transient data scenarios such as temp tables and ETL.
28. In-Memory OLTP enhancements (SQL Server 2017)
sp_spaceused is now supported for memory-optimized tables.
sp_rename is now supported for memory-optimized tables and natively compiled T-SQL modules.
CASE statements are now supported for natively compiled T-SQL modules.
The limitation of 8 indexes on memory-optimized tables has been eliminated.
TOP (N) WITH TIES is now supported in natively compiled T-SQL modules.
ALTER TABLE against memory-optimized tables is now substantially faster in most cases.
Transaction log redo of memory-optimized tables is now done in parallel. This bolsters faster recovery times and significantly increases the sustained throughput of AlwaysOn availability group configurations.
Memory-optimized filegroup files can now be stored on Azure Storage. Backup/Restore of memory-optimized files on Azure Storage is supported.
Support for computed columns in memory-optimized tables, including indexes on computed columns.
Full support for JSON functions in natively compiled modules, and in check constraints.
CROSS APPLY operator in natively compiled modules.
Performance of btree (non-clustered) index rebuild for MEMORY_OPTIMIZED tables during database recovery has been significantly optimized. This improvement substantially reduces the database recovery time when non-clustered indexes are used.
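As an illustration of the new computed-column support, a sketch of a memory-optimized table with an indexed computed column (the table and column names are made up for the example):

```sql
CREATE TABLE dbo.OrderLines (
    OrderLineID int NOT NULL PRIMARY KEY NONCLUSTERED,
    Qty         int NOT NULL,
    UnitPrice   money NOT NULL,
    -- Computed column, now allowed in memory-optimized tables
    LineTotal   AS Qty * UnitPrice,
    -- Indexes on computed columns are also new in SQL Server 2017
    INDEX ix_LineTotal NONCLUSTERED (LineTotal)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```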
30. Availability Groups + Failover Clustering (Linux)
Always On: Failover Cluster Instances and Availability Groups work together to ensure data is accessible despite failures.
[Diagram: a Pacemaker cluster of nodes spread across two network subnets; an Always On SQL Server Failover Cluster Instance on shared storage acts as the primary replica, with SQL Server instances on the remaining nodes (each with its own storage) as secondary replicas in an Always On Availability Group; Pacemaker configuration on every node, a Pacemaker cluster virtual IP per instance network name, and a manually registered DNS name]
31. Always On cross-platform capabilities
Mission critical availability on any platform
• Always On availability groups for Linux (NEW) and Windows for HA and DR
• Flexibility for HA architectures (NEW)
• Ultimate HA with OS-level redundancy and failover
• Load balancing of readable secondaries
Use cases: High Availability, Offload Backups, Scale BI Reporting, Enables Testing, Enables Migrations
32. Enhanced AlwaysOn Availability Groups (SQL Server 2017)
Guarantee commits on synchronous secondary replicas
Use REQUIRED_COPIES_TO_COMMIT with CREATE AVAILABILITY GROUP or ALTER AVAILABILITY GROUP. When REQUIRED_COPIES_TO_COMMIT is set to a value higher than 0, transactions at the primary replica databases will wait until the transaction is committed on the specified number of synchronous secondary replica database transaction logs.
If enough synchronous secondary replicas are not online, write transactions to the primary replica will stop until communication with sufficient secondary replicas resumes.
[Diagram: unified HA solution – AG_Listener in front of New York (primary), with synchronous data movement to New Jersey (secondary) and asynchronous data movement to Hong Kong (secondary)]
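A sketch of setting this on an existing group; the availability group name is a placeholder, and the option is shown under the name used in this deck (pre-release builds), so check your version's documentation for the exact spelling:

```sql
-- Require at least one synchronous secondary to harden the log
-- before commits at the primary complete (option name per this deck)
ALTER AVAILABILITY GROUP [ag_sales]
SET (REQUIRED_COPIES_TO_COMMIT = 1);
```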
33. Enhanced AlwaysOn Availability Groups (SQL Server 2017)
CLUSTER_TYPE
Use with CREATE AVAILABILITY GROUP. Identifies the type of server cluster manager that manages an availability group. Can be one of the following types:
• WSFC – Windows Server failover cluster. On Windows, it is the default value for CLUSTER_TYPE.
• EXTERNAL – A cluster manager that is not a Windows Server failover cluster – for example, Pacemaker on Linux.
• NONE – No cluster manager. Used for a read-scale availability group.
[Diagram: unified HA solution – AG_Listener in front of New York (primary), with synchronous data movement to New Jersey (secondary) and asynchronous data movement to Hong Kong (secondary)]
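For example, a clusterless read-scale group might be created like this sketch (server, database, and endpoint names are placeholders):

```sql
CREATE AVAILABILITY GROUP [ag_readscale]
WITH (CLUSTER_TYPE = NONE)          -- no cluster manager: read scale-out only
FOR DATABASE [SalesDB]
REPLICA ON
    N'sqlprimary' WITH (
        ENDPOINT_URL = N'tcp://sqlprimary:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)),
    N'sqlreader1' WITH (
        ENDPOINT_URL = N'tcp://sqlreader1:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));
```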
39. Adaptive Query Processing
Three features to improve query performance, enabled when the database is in SQL Server 2017 compatibility mode (140):
ALTER DATABASE current SET COMPATIBILITY_LEVEL = 140;
• Interleaved Execution
• Batch Mode Memory Grant Feedback
• Batch Mode Adaptive Joins
40. Query Processing and Cardinality Estimation
During optimization, the cardinality estimation (CE) process is responsible for estimating the number of rows processed at each step in an execution plan.
CE uses a combination of statistical techniques and assumptions.
When estimates are accurate (enough), we make informed decisions around order of operations and physical algorithm selection.
41. Common reasons for incorrect cardinality estimates
• Missing statistics
• Stale statistics
• Inadequate statistics sample rate
• Bad parameter sniffing scenarios
• Out-of-model query constructs (e.g. MSTVFs, table variables, XQuery)
• Assumptions not aligned with data being queried (e.g. independence vs. correlation)
42. Cost of incorrect estimates
• Slow query response time due to inefficient plans
• Excessive resource utilization (CPU, memory, IO)
• Spills to disk
• Reduced throughput and concurrency
• T-SQL refactoring to work around off-model statements
43. Interleaved Execution
Problem: Multi-statement table valued functions (MSTVFs) are treated as a black box by the QP, and we use a fixed optimization guess.
Interleaved Execution will materialize row counts for multi-statement table valued functions (MSTVFs). Downstream operations will benefit from the corrected MSTVF cardinality estimate.
[Flow: pre-2017, 100 rows are guessed for MSTVFs, causing performance issues if the data is skewed; in 2017+, the MSTVF is identified and executed first, so the actual row count (e.g. 500k rows) feeds the rest of the plan, giving good performance]
44. Batch Mode Memory Grant Feedback
Problem: Queries can spill to disk or take too much memory based on poor cardinality estimates.
Memory Grant Feedback (MGF) will adjust memory grants based on execution feedback. MGF will remove spills and improve concurrency for repeating queries.
45. Batch Mode Adaptive Joins
Problem: If cardinality estimates are skewed, we may choose an inappropriate join algorithm.
Batch Mode Adaptive Joins (AJ) will defer the choice of hash join or nested loop until after the first join input has been scanned. AJ uses nested loop for small inputs, hash joins for large inputs.
[Flow: the build input is compared against an adaptive threshold – small inputs take the nested loop path, larger inputs take the hash join path]
47. What is a Graph?
A graph is a collection of Nodes and Edges
• Undirected Graph
• Directed Graph
• Weighted Graph
• Property Graph
[Diagram: two Person nodes connected by an edge]
48. Typical Scenarios for Graph Databases
• Hierarchical or interconnected data, entities with multiple parents
• Analyze interconnected data, materialize new information from existing facts, identify non-obvious connections
• Complex many-to-many relationships; organically grow connections as the business evolves
49. Introducing SQL Server Graph
A collection of node and edge tables in the database
Language Extensions
• DDL Extensions – create node and edge tables
• DML Extensions – SELECT: T-SQL MATCH clause to support pattern matching and traversals; DELETE, UPDATE, and INSERT support graph tables
Graph support is integrated into the SQL Server ecosystem
[Diagram: a Database contains a Graph, which is a collection of Node table(s) and Edge table(s); nodes have properties, edges may or may not have properties, and edges connect nodes]
50. DDL Extensions
• Create node and edge tables
• Properties associated with nodes and edges

CREATE TABLE Product (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;
CREATE TABLE Supplier (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;
CREATE TABLE hasInventory AS EDGE;
CREATE TABLE located_at(address varchar(100)) AS EDGE;
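To connect rows in these tables, an insert into an edge table references the $node_id pseudo-column of each endpoint node; the specific IDs and values below are illustrative:

```sql
INSERT INTO Product  VALUES (1, 'Widget');
INSERT INTO Supplier VALUES (10, 'Contoso Parts');

-- An edge row stores the $node_id of its "from" and "to" nodes
INSERT INTO hasInventory ($from_id, $to_id)
VALUES ((SELECT $node_id FROM Supplier WHERE ID = 10),
        (SELECT $node_id FROM Product  WHERE ID = 1));
```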
51. DML Extensions
Multi-hop navigation and join-free pattern matching using the MATCH predicate:

SELECT Prod.name AS ProductName, Sup.name AS SupplierName
FROM Product Prod, Supplier Sup,
     hasInventory hasIn,
     located_at supp_loc,
     Customer Cus,
     located_at cust_loc,
     orders, location loc
WHERE MATCH(
  Cus-(orders)->Prod<-(hasIn)-Sup
  AND
  Cus-(cust_loc)->location<-(supp_loc)-Sup
);
53. T-SQL TRIM
TRIM ( [ characters FROM ] string )
Removes the space character (char(32)) or other specified characters from the start or end of a string.

Removing the space character from both sides of a string (equivalent to LTRIM(RTRIM(string))):
SELECT TRIM (' test ') AS Result;
Result
-----------
test

Removing specified characters from both sides of a string (trimming multiple characters):
SELECT TRIM( '.,! ' FROM '# test .') AS Result;
Result
---------------
# test
54. T-SQL CONCAT_WS
CONCAT_WS ( separator, argument1, argument2 [, argumentN]…)
Concatenates a variable number of arguments with a delimiter specified in the first argument.

Concatenating with a delimiter:
SELECT CONCAT_WS( ' - ','one','two','three','four') AS Result;
Result
------------------------
one - two - three - four

Concatenation ignores NULL:
SELECT CONCAT_WS( ' - ','one',NULL,'two',NULL,'three',NULL,'four') AS Result;
Result
---------------------------------
one - two - three - four
55. T-SQL TRANSLATE
TRANSLATE ( inputString, characters, translations )
Returns the string provided as a first argument after some characters specified in the second argument are translated into a destination set of characters.

Replacing square and curly braces with regular parentheses:
SELECT TRANSLATE('2*[3+4]/{7-2}', '[]{}', '()()') AS Result;
Result
-------------
2*(3+4)/(7-2)

Converting GeoJSON points into WKT (and back):
SELECT TRANSLATE('[137.4, 72.3]' , '[,]', '( )') AS Point, TRANSLATE('(137.4 72.3)' , '( )', '[,]') AS Coordinates;
Point Coordinates
------------- -------------
(137.4 72.3) [137.4,72.3]
56. T-SQL STRING_AGG
STRING_AGG ( expression, separator ) [ <order_clause> ]
<order_clause> ::= WITHIN GROUP ( ORDER BY <order_by_expression_list> [ ASC | DESC ] )
Concatenates the values of string expressions and places separator values between them. The separator is not added at the end of the string.

Generating a comma-separated list of names (without NULL values):
SELECT STRING_AGG ( ISNULL(FirstName,'N/A'), ',') AS csv FROM Person.Person;
csv
-----------------------------------------------------------------------------------------------------------------------------
Syed,Catherine,Kim,Kim,Kim,Hazem,Sam,Humberto,Gustavo,Pilar,Pilar,Aaron,Adam,Alex,Alexandra,Allison,Amanda,Amber,Andrea,Angel

Generating a sorted list of emails per town:
SELECT town, STRING_AGG (email, ';') WITHIN GROUP (ORDER BY email ASC) AS emails FROM dbo.Employee GROUP BY town;
town emails
------- ---------------------------------------------------------------------------------
Seattle catherine0@adventure-works.com;kim2@adventure-works.com;syed0@adventure-works.com
LA hazem0@adventure-works.com;sam1@adventure-works.com
57. T-SQL BULK INSERT / OPENROWSET(BULK…)
Additional options added that provide support for CSV format data files:
[ [ , ] FORMAT = 'CSV' ]
[ [ , ] FIELDQUOTE = 'quote_characters']
Data files and format files can now be loaded from Azure Blob Storage
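A sketch of loading a CSV file with the new options; the file path and target table are placeholders, and FIRSTROW = 2 simply skips a header row:

```sql
BULK INSERT dbo.SalesStaging
FROM 'C:\data\sales.csv'
WITH (FORMAT = 'CSV',       -- new in SQL Server 2017
      FIELDQUOTE = '"',     -- quote character for quoted fields
      FIRSTROW = 2);        -- skip the header line
```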
59. SQL Server performance features: Columnstore
Columnstore: a technology for storing, retrieving, and managing data by using a columnar data format called a columnstore, with data stored as columns. You can use columnstore indexes for real-time analytics on your operational workload.
Key benefits:
• Provides a very high level of data compression, typically 10x, to reduce your data warehouse storage cost significantly
• Indexing on a column with repeated values vastly improves performance for analytics
• Improved performance: more data fits in memory, and batch-mode execution
60. Columnstore Index Enhancements (SQL Server 2017)
Clustered columnstore indexes now support LOB columns (nvarchar(max), varchar(max), varbinary(max))
Online non-clustered columnstore index build and rebuild support added
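With the online support, a non-clustered columnstore index can now be built without blocking writers, as in this sketch (table and column names are hypothetical):

```sql
-- Build a non-clustered columnstore index while the table stays writable
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Sales
ON dbo.Sales (CustomerID, OrderDate, Amount)
WITH (ONLINE = ON);
```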
62. Resumable Online Indexing
With Resumable Online Index Rebuild you can resume a paused index rebuild operation from where the rebuild operation was paused rather than having to restart the operation at the beginning. In addition, this feature rebuilds indexes using only a small amount of log space.
• Resume an index rebuild operation after an index rebuild failure, such as after a database failover or after running out of disk space. There is no need to restart the operation from the beginning. This can save a significant amount of time when rebuilding indexes for large tables.
• Pause an ongoing index rebuild operation and resume it later – for example, to temporarily free up system resources to execute a high-priority task. Instead of aborting the index rebuild process, you can pause the index rebuild operation and resume it later without losing prior progress.
• Rebuild large indexes without using a lot of log space and without a long-running transaction that blocks other maintenance activities. This helps log truncation and avoids out-of-log errors that are possible for long-running index rebuild operations.
63. Using Resumable Online Index Rebuild
Start a resumable online index rebuild
ALTER INDEX test_idx on test_table REBUILD WITH (ONLINE=ON, RESUMABLE=ON) ;
Pause a resumable online index rebuild
ALTER INDEX test_idx on test_table PAUSE ;
Resume a paused online index rebuild
ALTER INDEX test_idx on test_table RESUME ;
Abort a resumable online index rebuild (which is running or paused)
ALTER INDEX test_idx on test_table ABORT ;
View metadata about resumable online index operations
SELECT * FROM sys.index_resumable_operations ;
65. Upgrade and migration tools
Data Migration Assistant (DMA)
• Upgrade from previous version of SQL Server (on-premises or SQL Server 2017 in Azure VM)
SQL Server Migration Assistant
• Migrate from Oracle, MySQL, SAP ASE, DB2, or Access to SQL Server 2017 (on-premises or SQL Server 2017 in Azure VM)
Azure Database Migration Service
• Migrate from SQL Server, Oracle, or MySQL to Azure SQL Database or SQL Server 2017 in Azure VM
66. Upgrading to SQL Server 2017
In-place or side-by-side upgrade path from:
• SQL Server 2008
• SQL Server 2008 R2
• SQL Server 2012
• SQL Server 2014
• SQL Server 2016
Side-by-side upgrade path from:
• SQL Server 2005
Use Data Migration Assistant to prepare for migration
67. DMA: Assess and upgrade schema
[Flow: legacy SQL Server instance → Data Migration Assistant → SQL Server 2017]
1. Assess and identify issues
2. Fix issues
3. Upgrade database
69. Migrating to SQL Server 2017 from other platforms
Identify apps for migration → use migration tools and partners (SQL Server Migration Assistant and the global partner ecosystem) → deploy to production (SQL Server 2017 on Windows or SQL Server 2017 on Linux)
70. Migration Assistant
Database and application migration process
• Database connectivity
• User login and permission
• Performance tuning
• Database discovery
• Architecture requirements (HADR, performance, locale, maintenance, dependencies, etc.)
• Migration assessment – complexity, effort, risk
• Schema conversion
• Data migration
• Embedded SQL statements
• ETL and batch
• System and DB interfaces
71. SQL Server Migration Assistant (SSMA)
Automates and simplifies all phases of database migration
• Migration Analyzer – assess migration complexity
• Schema Converter – convert schema and business logic
• Data Migrator – migrate data
• Migration Tester – validate converted database code
Supports migration from DB2, Oracle, SAP ASE, MySQL, or Access to SQL Server
72. Using SQL Server Migration Assistant (SSMA)
SSMA: Automates components of database migrations to SQL Server
DB2, Oracle, Sybase, Access, and MySQL analyzers are available
Steps: assess the migration project (SSMA migration analyzer) → migrate schema and business logic (SSMA schema converter) → migrate data (SSMA data migrator) → convert the application → test, integrate, and deploy
73. Azure solution paths
• Do not have to manage any VMs, OS, or database software, including upgrades, high availability, and backups
• Highly customized system to address the application’s specific performance and availability requirements
75. DMA: Assess and migrate schema
[Flow: legacy SQL Server instance → DMA]
1. Assess and identify issues
2. Fix issues
3. Convert and deploy schema
76. Azure Database Migration Service
Accelerating your journey to the cloud
• Streamline database migration to Azure SQL Database (PaaS)
• Managed service platform for migrating databases
• Migrate SQL Server & 3rd party databases to Azure SQL Database
[Diagram: Oracle and SQL Server sources migrating to SQL DB]
77. SEAMLESS CLOUD INTEGRATION
Easy lift-and-shift migration
Azure SQL Database Managed Instance (private preview) facilitates lift-and-shift migration from on-premises SQL Server to the cloud
Azure Hybrid Benefit for SQL Server maximizes current on-premises license investments to facilitate migration
Database Migration Service (DMS) (private preview) provides seamless and reliable migration at scale with minimal downtime
Most consistent data platform
Database Migration Service (DMS)
Azure SQL Database Managed Instance
Azure Hybrid Benefit (AHB) for SQL Server
79. SQL Server Editions
SQL Server edition Definition
Enterprise
The premium offering, SQL Server Enterprise edition delivers comprehensive high-end datacenter capabilities with
blazing-fast performance, unlimited virtualization, and end-to-end business intelligence — enabling high service levels
for mission-critical workloads and end user access to data insights.
Standard
SQL Server Standard edition delivers basic data management and business intelligence capabilities for departments and
small organizations to run their applications, and supports common development tools for on-premises and cloud —
enabling effective database management with minimal IT resources.
Web
SQL Server Web edition is a low total-cost-of-ownership option for Web hosters and Web VAPs to provide scalability,
affordability, and manageability capabilities for small to large scale Web properties.
Developer
SQL Server Developer edition lets developers build any kind of application on top of SQL Server. It includes all the
functionality of Enterprise edition, but is licensed for use as a development and test system, not as a production server.
SQL Server Developer is an ideal choice for people who build and test applications.
Express
Express edition is the entry-level, free database and is ideal for learning and building desktop and small server data-
driven applications. It is the best choice for independent software vendors, developers, and hobbyists building client
applications. If you need more advanced database features, SQL Server Express can be seamlessly upgraded to other
higher-end versions of SQL Server. SQL Server Express LocalDB is a lightweight version of Express that has all of its
programmability features, yet runs in user mode and has a fast, zero-configuration installation and a short list of
prerequisites.
80. Capacity Limits by Edition
Feature | Enterprise | Standard | Web | Express
Maximum compute capacity used by a single instance (SQL Server Database Engine) | Operating system maximum | Lesser of 4 sockets or 24 cores | Lesser of 4 sockets or 16 cores | Lesser of 1 socket or 4 cores
Maximum compute capacity used by a single instance (Analysis Services or Reporting Services) | Operating system maximum | Lesser of 4 sockets or 24 cores | Lesser of 4 sockets or 16 cores | Lesser of 1 socket or 4 cores
Maximum memory for buffer pool per instance of the Database Engine | Operating system maximum | 128 GB | 64 GB | 1410 MB
Maximum memory for columnstore segment cache per instance of the Database Engine | Unlimited memory | 32 GB | 16 GB | 352 MB
Maximum memory-optimized data size per database | Unlimited memory | 32 GB | 16 GB | 352 MB
Maximum relational database size | 524 PB | 524 PB | 524 PB | 10 GB
81. SQL Server Features
Server components Description
SQL Server Database Engine
SQL Server Database Engine includes the Database Engine, the core service for storing,
processing, and securing data; replication; full-text search; tools for managing relational
and XML data; in-database analytics integration; PolyBase integration for access to
Hadoop and other heterogeneous data sources; and the Data Quality Services (DQS)
server.
Analysis Services
Analysis Services includes the tools for creating and managing online analytical processing
(OLAP) and data mining applications.
Reporting Services
Reporting Services includes server and client components for creating, managing, and
deploying tabular, matrix, graphical, and free-form reports. Reporting Services is also an
extensible platform that you can use to develop report applications.
Integration Services
Integration Services is a set of graphical tools and programmable objects for moving,
copying, and transforming data. It also includes the Data Quality Services (DQS)
component for Integration Services.
Master Data Services
Master Data Services (MDS) is the SQL Server solution for master data management. MDS
can be configured to manage any domain (products, customers, accounts) and includes
hierarchies, granular security, transactions, data versioning, and business rules, as well as
an Add-in for Excel that can be used to manage data.
Machine Learning Services (In-Database)
Machine Learning Services (In-Database) supports distributed, scalable machine learning
solutions using enterprise data sources. SQL Server 2017 supports R and Python.
Machine Learning Server (Standalone)
Machine Learning Server (Standalone) supports deployment of distributed, scalable
machine learning solutions on multiple platforms and using multiple enterprise data
sources, including Linux, Hadoop, and Teradata. SQL Server 2017 supports R and Python.
82. Features by Edition
Some SQL Server features (and sub-features) are available
only to certain editions:
• see https://docs.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2017 for a complete list
83. Q & A ?
James Serra, Big Data Evangelist
Email me at: JamesSerra3@gmail.com
Follow me at: @JamesSerra
Link to me at: www.linkedin.com/in/JamesSerra
Visit my blog at: JamesSerra.com (where this slide deck is posted under the “Presentations” tab)
Editor's Notes
Fluff, but point is I bring real work experience to the session
Title: SQL Server 2017 – Meeting you where you are
Any data
Access diverse data, including video, streaming, documents, relational, both external data and data internal to your org
Use PolyBase to access Hadoop big data and Azure Blob storage with the simplicity of T-SQL
You can use Azure DocumentDB, a NoSQL document database service, for native JSON support and JavaScript built directly inside the database engine
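As a sketch of the PolyBase point above: once an external table has been defined over Hadoop or blob data, it is queried like any other table. The table and column names below are hypothetical:

```sql
-- Hypothetical external table created via PolyBase; queried with plain T-SQL
SELECT TOP (10) url, visit_count
FROM dbo.WebLogsExternal
ORDER BY visit_count DESC;
```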
Any application
Leverage the T-SQL skills of your talent base to run advanced analytics through R/Python models, and to access structured and unstructured data
Take advantage of Microsoft–created database connectivity drivers and open-source drivers that enable developers to build any application using the platforms and tools of their choice, including Python, Ruby, and Node.js
Anywhere
Flexible on-premises and cloud
Easily backup to the cloud
You can now migrate a SQL Server workload to Azure SQL DB. The parity is there and the notion that SQL Server doesn’t map to Azure SQL DB is no longer the case
Keep more historical data at your fingertips by dynamically stretching tables to the cloud with Stretch Database.
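A rough sketch of enabling Stretch Database; the database, server, credential, and table names are hypothetical, and the exact options should be checked against the documentation:

```sql
-- Enable Stretch Database at the database level (all names are hypothetical)
ALTER DATABASE SalesDb
SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = 'stretchserver.database.windows.net', CREDENTIAL = StretchCred);

-- Begin migrating a table's historical data to Azure
ALTER TABLE dbo.OrderHistory
SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```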
Choice of platform
Aligns to your operating system environment. Today, SQL Server is available on Windows/Windows Server, Linux, and Docker
Benefit from continued integration with Windows Server for industry-leading performance, scale and virtualization on Windows.
Note: Tux penguin image created by Larry Ewing
Source: https://docs.microsoft.com/en-us/sql/advanced-analytics/what-s-new-in-sql-server-machine-learning-services
Microsoft R Services has been renamed Microsoft Machine Learning Services
Customers need flexibility when it comes to the choice of platform, programming languages, and data infrastructure to get the most from their data.
Why? In most IT environments, platforms, technologies, and skills are as diverse as they have ever been; the data platform of the future needs to let you build intelligent applications on any data, any platform, any language, on-premises and in the cloud.
SQL Server manages your data, across platforms, with any skills, on-premises & cloud
Our goal is to meet you where you are: on any platform, anywhere, with the tools and languages of your choice.
SQL Server Database Engine now has support for Windows, Linux & Docker Containers.
Microsoft has focused on providing a Linux-native user experience for SQL Server, starting with the installation process. Installing SQL Server 2017 uses the standard package-based installation method for Linux, using yum for Fedora-based distributions, apt-get for Debian-based distributions, and zypper for SUSE Linux Enterprise Server (SLES).
Administrators can update SQL Server 2017 instances on Linux using their existing package update/upgrade processes.
The SQL Server service runs natively using systemd, and performance can be monitored through the file system as for other system daemons.
Linux file paths are supported in T-SQL statements and scripts such as defining/changing the location of data files or database backup files.
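For example (the database name and path are hypothetical):

```sql
-- Linux-style file path in an ordinary T-SQL backup statement
BACKUP DATABASE SalesDb
TO DISK = '/var/opt/mssql/backup/SalesDb.bak'
WITH FORMAT, INIT;
```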
High-availability clustering can be managed with popular Linux high-availability solutions like Pacemaker and Corosync.
SQL Server command-line tools are available for Linux (including sqlcmd and bcp). MacOS versions of these tools are available as a preview at time of writing (https://blogs.technet.microsoft.com/dataplatforminsider/2017/05/16/sql-server-command-line-tools-for-macos-released/).
Existing Windows tools such as SQL Server Management Studio (SSMS), SQL Server Data Tools (SSDT), PowerShell module (sqlps) can be used to manage SQL Server on Linux from a Windows instance.
The Visual Studio Code extension for SQL Server can run on macOS, Linux, or Windows.
Microsoft offers tools such as Migration Assistant, also supported on Linux, to assist with moving existing workloads on SQL Server.
The SQL Server Docker container is built with Ubuntu 16.04 and SQL Server for Linux.
The Platform Abstraction Layer (“PAL”) is what enables SQL Server to run on Linux and Docker. The PAL is used to consolidate OS/platform specific code to enable SQL Server code to become OS agnostic.
The SQL Server team set strict requirements to ensure that functionality, performance, and scale were not compromised when deployed to Linux.
Part of what makes this possible is the integration of certain parts of MSR’s project Drawbridge. Drawbridge provided an abstraction between the underlying operating system and the application for the purposes of secure containers. Drawbridge was combined with SQL Server OS, which provided memory management, thread scheduling, and IO services, to create SQLPAL.
In short, the creation of the PAL allows the same, time-proven core code base for SQL Server to run on new environments such as Docker and Linux – as opposed to porting the Windows code base into multiple operating environments. SQL Server 2017 is not a re-write or a port – it is the same performant, scalable product Microsoft customers have relied upon for years.
For more detail, see https://blogs.technet.microsoft.com/dataplatforminsider/2016/12/16/sql-server-on-linux-how-introduction/
Because the Linux and Windows versions of SQL Server use the same code base, existing applications, drivers, frameworks, and tools will connect to and operate with SQL Server on Linux without modification.
(The screenshot shows the Visual Studio Code mssql extension in operation)
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-develop-connectivity-libraries
The list of languages and drivers given in the slide is not exhaustive.
Any language which supports ODBC data sources should be able to use the ODBC drivers.
Any languages based on the JVM should be able to use the JDBC or ODBC drivers.
The Microsoft ODBC driver for SQL Server is available in native versions for Windows, Linux and macOS
SQL Server on Linux aims to support the core relational database engine capabilities.
In general, our goal is “it’s just SQL Server.”
95% of features just work – anything app or coding related
Some features have partial support – e.g. SQL Server Agent isn’t going to launch a windows command prompt
Some features are in progress
Some features we’ll never support – e.g. FileTable, where you have a win32 share to place files that show up in the engine – the next slide contains more details
And while it’s the same SQL Server that you may (or may not!) be used to, we’re putting in a lot of effort to be a good Linux citizen.
Callouts:
Package-based installs – no “crappy installers” like predicted on Twitter; if SQL is coming to Linux, we’re going to do it right
Failover Clustering – resilience against OS/SQL failures; automatic failover within seconds
Log Shipping – warm standbys for DR
Xplat CLI – sqlcmd lets you connect/query from any OS; bcp lets you bulk copy data;
In-Memory – 30-100x performance increases by keeping tables in-memory and using natively compiled queries
ColumnStore – why SQL Server is the leader in Gartner’s Magic Quadrant for Data Warehousing, holds the top 3 slots in performance benchmark
Always Encrypted – protect your most sensitive data even from high-privileged database administrators
AD authentication – no need to manage separate credentials for SQL Server on Linux (https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-active-directory-authentication)
NB: This slide is correct as at SQL Server 2017 RC2. The list of unsupported features might change in later releases - see https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-release-notes
In-Memory OLTP is available in all editions of SQL Server 2017 (including SQL Server 2017 Express Edition). This is a change introduced in SQL Server 2016 Service Pack 1, prior to which In-Memory OLTP was restricted to Enterprise Edition.
Source - https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-availability-group-overview
NB - At this point, SQL Server's integration with Pacemaker on Linux is not as tightly coupled as with WSFC on Windows. From within SQL Server there is no knowledge of the presence of the cluster; all orchestration is outside-in, and the service is controlled as a standalone instance by Pacemaker. Also, the virtual network name is specific to WSFC; there is no equivalent in Pacemaker. Always On dynamic management views that query cluster information will return empty rows. You can still create a listener to use for transparent reconnection after failover, but you will have to manually register the listener name in the DNS server with the IP used to create the virtual IP resource.
NB – The configuration shown in this slide is not one customers typically use (one primary in an FCI, one local secondary replica, and two remote secondary replicas). The most common layout is the primary replica in an FCI in the primary data center for HA, and then a secondary replica in the remote data center (different subnet) for DR.
In this case, the multiple secondary replicas might be used for read scale-out.
[this slide has animation]
Mission critical availability on any platform
In SQL Server 2017, we are enabling the same High Availability (HA) and Disaster Recovery (DR) solutions on all platforms supported by SQL Server, including Windows and Linux. Always On Availability Groups is SQL Server’s flagship solution for HA
[click]
and DR.
[click]
SQL Server Always On availability groups can have up to eight readable secondary replicas. Each of these secondary replicas can have their own replicas as well. When daisy chained together, these readable replicas can create massive scale-out for analytics workloads. This scale-out scenario enables you to replicate around the globe, keeping read replicas close to your Business Analytics users. It’s of particularly big interest to users with large data warehouse implementations. And, it’s also easy to set up.
In fact, you can now create availability groups that span Windows and Linux nodes, and scale out your analytics workloads across multiple operating systems.
New flexibility to do HA without Windows Server failover clustering
Failover clustering with Pacemaker, and more, through integration scripts and guides
Always On availability groups with automatic fail-over, listener, synchronous replication, read-only secondaries
Shared disk failover clusters
Backup and restore: .bak, .bacpac, and .dacpac
Log shipping
Distributed transactions for databases in availability groups: https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/transactions-always-on-availability-and-database-mirroring
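The Linux availability-group setup described above can be sketched as follows; the node names and endpoints are hypothetical, and CLUSTER_TYPE = EXTERNAL indicates that an external cluster manager such as Pacemaker handles failover:

```sql
-- Sketch: availability group on Linux with an external cluster manager
CREATE AVAILABILITY GROUP ag1
    WITH (CLUSTER_TYPE = EXTERNAL)
    FOR DATABASE db1
    REPLICA ON
        'linuxnode1' WITH (
            ENDPOINT_URL = 'tcp://linuxnode1:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL,
            SEEDING_MODE = AUTOMATIC),
        'linuxnode2' WITH (
            ENDPOINT_URL = 'tcp://linuxnode2:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL,
            SEEDING_MODE = AUTOMATIC);
```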
Source: https://docs.microsoft.com/en-us/sql/relational-databases/automatic-tuning/automatic-tuning
Automatic tuning is a continuous monitoring and analysis process that constantly learns about the characteristics of your workload and identifies potential issues and improvements.
Automatic tuning in SQL Server 2017 notifies you whenever a potential performance issue is detected, and lets you apply corrective actions, or lets the Database Engine automatically fix performance problems. Automatic tuning in SQL Server 2017 enables you to identify and fix performance issues caused by SQL plan choice regressions.
Automatic tuning in Azure SQL Database creates necessary indexes and drops unused indexes.
The Database Engine monitors the queries that are executed on the database and automatically improves performance of the workload. Database Engine has a built-in intelligence mechanism that can automatically tune and improve performance of your queries by dynamically adapting the database to your workload. There are two automatic tuning features that are available:
Automatic plan correction (available in SQL Server 2017) that identifies problematic plans and fixes SQL plan performance problems.
Automatic index management (available in Azure SQL Database) that identifies indexes that should be added in your database, and indexes that should be removed.
Constantly monitoring performance can be a hard and tedious task, especially when dealing with many databases. Managing a huge number of databases might be impossible to do efficiently. Instead of monitoring and tuning your database manually, you might consider delegating some of the monitoring and tuning actions to Database Engine using automatic tuning feature.
Automatic plan monitoring is enabled by default in SQL Server 2017.
For more information about sys.dm_db_tuning_recommendations, see https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-tuning-recommendations-transact-sql
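A minimal query against that DMV, following the pattern in the documentation:

```sql
-- List tuning recommendations, including the suggested corrective script
SELECT reason,
       score,
       JSON_VALUE(details, '$.implementationDetails.script') AS corrective_script
FROM sys.dm_db_tuning_recommendations;
```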
Source - https://docs.microsoft.com/en-us/sql/relational-databases/automatic-tuning/automatic-tuning#automatic-plan-correction
In addition to detection, the Database Engine can automatically switch to the last known good plan whenever the regression is detected.
When the Database Engine applies a recommendation, it automatically monitors the performance of the forced plan. The forced plan will be retained until a recompile (for example, on next statistics or schema change) if it is better than the regressed plan. If the forced plan is not better than the regressed plan, the new plan will be unforced and the Database Engine will compile a new plan.
Enabling automatic plan choice correction
The user can enable automatic tuning per database and specify that last good plan should be forced whenever some plan change regression is detected. Automatic tuning is enabled using the following command:
ALTER DATABASE current SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON );
Once you turn on this option, the Database Engine will automatically force any recommendation where the estimated CPU gain is higher than 10 seconds, or where the number of errors in the new plan is higher than in the recommended plan, and will verify that the forced plan is better than the current one.
Source - https://docs.microsoft.com/en-us/sql/relational-databases/performance/adaptive-query-processing
Adaptive query processing provides three techniques to improve execution plan selection.
Batch mode memory grant feedback
Batch mode adaptive joins
Interleaved execution for multi-statement table valued functions
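These adaptive query processing features are enabled for a database by running it under compatibility level 140; the database name below is hypothetical:

```sql
ALTER DATABASE SalesDb
SET COMPATIBILITY_LEVEL = 140;
```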
[this slide has animation – bullets appear in sequence]
[this slide has animation]
There are many reasons that cardinality estimates might be inaccurate
[click]
When statistics do not exist
[click]
When statistics exist but are out of date, because the profile of the data has changed but the statistics have not
[click]
When statistics exist, but are not based on a representative sample of the data
[click]
When a cached query plan is optimized for a non-representative parameter value (parameter sniffing)
[click]
When the query uses constructs for which cardinality estimates cannot be directly inferred
[click]
Or when assumptions inferred from the statistics, such as correlation, are not correct
[this slide has animation – bullets appear in sequence]
Source - https://docs.microsoft.com/en-us/sql/relational-databases/performance/adaptive-query-processing#interleaved-execution-for-multi-statement-table-valued-functions
Interleaved execution changes the unidirectional boundary between the optimization and execution phases for a single-query execution and enables plans to adapt based on the revised cardinality estimates. During optimization if we encounter a candidate for interleaved execution, which is currently multi-statement table valued functions (MSTVFs), we will pause optimization, execute the applicable subtree, capture accurate cardinality estimates, and then resume optimization for downstream operations. MSTVFs have a fixed cardinality guess of “100” in SQL Server 2014 and SQL Server 2016, and “1” for earlier versions. Interleaved execution helps workload performance issues that are due to these fixed cardinality estimates associated with multi-statement table valued functions.
https://docs.microsoft.com/en-us/sql/t-sql/queries/match-sql-graph
The node names inside MATCH can be repeated. In other words, a node can be traversed an arbitrary number of times in the same query.
An edge name cannot be repeated inside MATCH.
An edge can point in either direction, but it must have an explicit direction.
OR and NOT operators are not supported in the MATCH pattern. MATCH can be combined with other expressions using AND in the WHERE clause. However, combining it with other expressions using OR or NOT is not supported.
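The MATCH rules above can be illustrated with a small sketch; the Person node table, friendOf edge table, and data are hypothetical:

```sql
-- Find the friends of Alice by traversing the friendOf edge
SELECT Person2.name AS FriendName
FROM Person AS Person1, friendOf, Person AS Person2
WHERE MATCH(Person1-(friendOf)->Person2)
  AND Person1.name = 'Alice';
```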
Source - https://docs.microsoft.com/en-us/sql/t-sql/functions/concat-ws-transact-sql
CONCAT_WS indicates concatenate with separator.
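For example (the values are arbitrary):

```sql
SELECT CONCAT_WS(', ', '1 Microsoft Way', 'Redmond', 'WA', '98052') AS address;
-- Returns: 1 Microsoft Way, Redmond, WA, 98052
```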
Source - https://docs.microsoft.com/en-us/sql/t-sql/functions/translate-transact-sql
The first example is equivalent to (but much simpler than) the following statement using REPLACE:
SELECT REPLACE(REPLACE(REPLACE(REPLACE('2*[3+4]/{7-2}','[','('), ']', ')'), '{', '('), '}', ')');
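The TRANSLATE form of that first example:

```sql
-- Each character in '[]{}' is replaced by the corresponding character in '()()'
SELECT TRANSLATE('2*[3+4]/{7-2}', '[]{}', '()()');
-- Returns: 2*(3+4)/(7-2)
```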
Source - https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/20/resumable-online-index-rebuild-is-in-public-preview-for-sql-server-2017-ctp-2-0/
See also https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql#online-index-operations
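A sketch of a resumable online rebuild; the index and table names are hypothetical:

```sql
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- If the rebuild is paused, it can later be resumed:
-- ALTER INDEX IX_Orders_OrderDate ON dbo.Orders RESUME;
```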
Source - https://docs.microsoft.com/en-us/sql/database-engine/install-windows/upgrade-sql-server
Detailed upgrade path guide - https://docs.microsoft.com/en-us/sql/database-engine/install-windows/choose-a-database-engine-upgrade-method
Data Migration Assistant replaces the SQL Server Upgrade Advisor - https://docs.microsoft.com/en-us/sql/database-engine/install-windows/prepare-for-upgrade-by-running-data-migration-assistant
[this slide contains animations]
In assessments, Data Migration Assistant (DMA) automates the potentially overwhelming process of checking database schemas and static objects for potential breaking changes from prior versions, and also provides performance and reliability recommendations for the target server.
[click]
The first phase is to use DMA to assess the legacy database and identify issues
[click]
In the second phase, issues are fixed. The first and second phases are repeated until all issues are addressed
[click]
Finally, the database is upgraded to SQL Server 2017
For more information, see https://blogs.msdn.microsoft.com/datamigration/2016/08/26/data-migration-assistant-how-to-assess-your-on-premises-sql-server-instance/
Intent
Visualize decision point: migrate to Azure SQL Database or Azure VM. Answer the question “what’s the best path for me?”
Two options for cloud migration:
- Infrastructure as a Service (IaaS) : SQL Server in Azure Virtual Machine (VM) allows you to run SQL Server inside a virtual machine in the cloud.
- Platform as a Service (PaaS). Microsoft Azure SQL Database is a relational database-as-a-service.
Both of these different cloud offerings can provide enterprise level database support, but their characteristics, capabilities and costs are quite different.
https://docs.microsoft.com/en-us/sql/ssma/sql-server-migration-assistant
SAP ASE was formerly known as SAP Sybase ASE / Sybase.
Our customers have given feedback and we wanted to acknowledge and act on it. We are addressing the migration concerns by releasing
Data Migration Assistant (DMA)—Built by SQL Engineering team with latest and greatest knowledge base of all SQL versions. This helps with assessing and planning.
Database Migration Service (DMS)—Newest offering, Azure service that helps you move your on-premises DBs to Azure at scale.
[this slide contains animations]
In assessments, Data Migration Assistant (DMA) automates the potentially overwhelming process of checking database schemas and static objects for potential breaking changes from prior versions, and also provides performance and reliability recommendations for the target server.
[click]
The first phase is to use DMA to assess the legacy database and identify issues
[click]
In the second phase, issues are fixed. The first and second phases are repeated until all issues are addressed
[click]
Finally, the database is converted and deployed to Azure
Source : https://azure.microsoft.com/en-gb/campaigns/database-migration/
As organizations look to optimize their IT infrastructure so they have more time and resources to focus on business transformation, Microsoft is committed to accelerating these initiatives. Microsoft announced that a new migration service is coming to Azure to streamline customers’ journey to the cloud. This service will streamline the tasks required to move existing competitive and SQL Server databases to Azure. Deployment options will include Azure SQL Database and SQL Server in Azure VM.
Managed service platform for migrating databases
Azure SQL DB and Managed Instance as targets
Competitive DBs – Oracle and more
Meets enterprise non-functional requirements (NFRs) – Compliance, Security, Costs, etc.
TALK ABOUT THE TECHNICAL DETAILS:
Source ->Target.
Secure.
Feature Parity with competitors.
Zero data loss and near-zero-downtime migration, as an Azure platform service.