T-SQL performance improvement - session 2
- Objectives
- Contents:
• Execution plan
• DMV
• SQL Profiler
• Parameter sniffing
• Table partitioning
- Q&A
- References
Objectives
• Understand how to use execution plans
• Understand how to use DMVs
• Be able to use SQL Server Profiler
• Understand parameter sniffing and how to avoid it
• Understand table partitioning in SQL Server
Execution plan
An execution plan is the result of the query optimizer's attempt to calculate the most efficient way to implement the request represented by the T-SQL query you submitted.
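For example, the plan can be returned from T-SQL rather than through the graphical plan in Management Studio; the table and filter below are hypothetical placeholders, not objects from the deck.

-- Return the estimated plan as XML without executing the statement
SET SHOWPLAN_XML ON;
GO
SELECT OrderID, OrderDate
FROM dbo.Orders              -- hypothetical table, for illustration only
WHERE OrderDate >= '20240101';
GO
SET SHOWPLAN_XML OFF;
GO

-- Execute the statement and return the actual plan together with the results
SET STATISTICS XML ON;
GO
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20240101';
GO
SET STATISTICS XML OFF;
GO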
Dynamic Management Views (DMVs)
• Views built on top of internal structures
• Ideal for tracking performance
Server level:
• dm_exec_*: execution of user code and associated connections
• dm_os_*: memory, locking and scheduling
• dm_tran_*: transactions and isolation
• dm_io_*: I/O on network and disks

Component level:
• dm_db_*: databases and database objects
• dm_repl_*: replication
• dm_broker_*: SQL Service Broker
• dm_fts_*: Full-Text Search
• dm_qn_*: Query Notifications
• dm_clr_*: Common Language Runtime
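For example, the dm_exec_* views can be combined to find the most expensive cached statements. This is only a sketch; the columns returned and the TOP filter are arbitrary choices.

-- Top 10 cached statements by logical reads
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_worker_time,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;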
SQL Profiler
• Profiler is a GUI-based tool that runs a SQL Server trace to capture metrics such as duration, number of reads, number of writes, the machine that ran the query, etc.
• A Profiler trace can consume a significant amount of network bandwidth
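As a related check from T-SQL, the traces currently defined on the instance (including the default trace) can be listed from sys.traces; this is just a sketch.

-- Traces defined on the instance; status 1 = running, is_rowset 1 = client-side (Profiler) trace
SELECT id, status, path, is_rowset, is_default
FROM sys.traces;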
Parameter sniffing
Parameter sniffing is the process whereby SQL Server creates an optimal plan for a stored procedure using the parameter values that are passed the first time the stored procedure is executed; that plan is then cached and reused by subsequent calls.
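For example, the hypothetical procedure below is compiled on its first call, so the cached plan is shaped by whichever @CustomerID value arrives first; the table and columns are placeholders.

-- Hypothetical procedure used only to illustrate the behaviour
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders                  -- placeholder table
    WHERE CustomerID = @CustomerID;
GO

EXEC dbo.GetOrdersByCustomer @CustomerID = 1;     -- plan compiled ("sniffed") for 1
EXEC dbo.GetOrdersByCustomer @CustomerID = 42000; -- reuses the plan compiled for 1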
Ways to avoid parameter sniffing problems:
• Create SQL Server stored procedures using the WITH RECOMPILE option
• Use the SQL Server hint OPTION (RECOMPILE)
• Use the SQL Server hint OPTION (OPTIMIZE FOR)
• Use local variables in SQL Server stored procedures
• Disable SQL Server parameter sniffing at the instance level
• Disable parameter sniffing for a specific SQL Server query (by adding the QUERYTRACEON hint to the OPTION clause)
Sketches of the first four options follow.
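These sketches reuse the hypothetical dbo.GetOrdersByCustomer procedure and dbo.Orders table introduced above; the statement-level options are shown as they would appear inside the procedure body.

-- 1. WITH RECOMPILE on the procedure: a fresh plan is built on every execution
ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
WITH RECOMPILE
AS
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO

-- 2. OPTION (RECOMPILE) on just the affected statement
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- 3. OPTION (OPTIMIZE FOR): always optimize for a chosen value, or for UNKNOWN
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));

-- 4. Local variable: the optimizer cannot sniff it and falls back to average density
DECLARE @LocalCustomerID int = @CustomerID;
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @LocalCustomerID;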
Table partitioning
Table partitioning is a way to divide a large table into smaller, more manageable parts without having to create separate tables for each part.
Partition column
Data in a partitioned table is partitioned based on a single column, the
partition column, often called the partition key. Only one column can be
used as the partition column, but it is possible to use a computed column.
Partition function
The partition function defines how to partition data based on the partition column. The partition function does not explicitly define the partitions and which rows are placed in each partition. Instead, the partition function specifies boundary values, the points between partitions.
Partition functions are created as either range left or range right to specify whether the boundary values belong to their left or right partitions:
• Range left means that the actual boundary value belongs to its left partition: it is the last value in the left partition.
• Range right means that the actual boundary value belongs to its right partition: it is the first value in the right partition.
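For example, a range right partition function on a date column; the name, data type and boundary values are illustrative placeholders.

-- RANGE RIGHT: each boundary value is the first value in the partition to its right
CREATE PARTITION FUNCTION pf_OrderDateYearly (date)
AS RANGE RIGHT FOR VALUES ('2022-01-01', '2023-01-01', '2024-01-01');
-- Creates four partitions: before 2022, the 2022 and 2023 ranges, and 2024 onwards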
Partition scheme
The partition scheme maps the logical partitions to physical filegroups. It is possible to map each
partition to its own filegroup or all partitions to one filegroup.
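A minimal sketch of a matching partition scheme and a table created on it, mapping every partition to the PRIMARY filegroup for simplicity; all object names are placeholders.

-- Map all partitions of the function above to the PRIMARY filegroup
CREATE PARTITION SCHEME ps_OrderDateYearly
AS PARTITION pf_OrderDateYearly ALL TO ([PRIMARY]);
GO

-- The partition column must be included in the unique clustered key
CREATE TABLE dbo.OrdersPartitioned
(
    OrderID   int   NOT NULL,
    OrderDate date  NOT NULL,
    TotalDue  money NOT NULL,
    CONSTRAINT PK_OrdersPartitioned PRIMARY KEY CLUSTERED (OrderID, OrderDate)
) ON ps_OrderDateYearly (OrderDate);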
Partition switching
Inserts, updates and deletes on large tables can be very slow and expensive, cause locking and
blocking, and even fill up the transaction log. One of the main benefits of table partitioning is
that you can speed up loading and archiving of data by using partition switching.
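As a sketch (using the hypothetical partitioned table above), the oldest partition can be switched out to an archive table with the same structure on the same filegroup.

-- The target table must match the partition's structure, indexes and filegroup
CREATE TABLE dbo.OrdersArchive
(
    OrderID   int   NOT NULL,
    OrderDate date  NOT NULL,
    TotalDue  money NOT NULL,
    CONSTRAINT PK_OrdersArchive PRIMARY KEY CLUSTERED (OrderID, OrderDate)
) ON [PRIMARY];
GO

-- A metadata-only operation: the partition's rows change ownership, nothing is copied
ALTER TABLE dbo.OrdersPartitioned
SWITCH PARTITION 1 TO dbo.OrdersArchive;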
References
1. SQL Server 2008 Query Performance Tuning Distilled
2. SQL Server 2012 Query Performance Tuning, 3rd Edition
3. SQL Server 2012 T-SQL Recipes, 3rd Edition
4. Expert Performance Indexing for SQL Server 2012
5. https://simple-talk.com
6. https://sqlservercentral.com
7. https://www.brentozar.com
8. http://www.cathrinewilhelmsen.net/
9. http://www.sqlshack.com/
Editor's Notes
1. Execution plans can tell you how a query will be executed, or how a query was executed. They are, therefore, the DBA's primary means of troubleshooting a poorly performing query. Rather than guess at why a given query is performing thousands of scans, putting your I/O through the roof, you can use the execution plan to identify the exact piece of SQL code that is causing the problem. For example, it may be scanning an entire table's worth of data when, with the proper index, it could simply pull out only the rows you need. All this and more is displayed in the execution plan.
2. When you submit a query to a SQL Server database, a number of processes on the server go to work on that query. The purpose of all these processes is to manage the system such that it will provide your data back to you, or store it, in as timely a manner as possible, whilst maintaining the integrity of the data. These processes are run for each and every query submitted to the system. While there are lots of different actions occurring simultaneously within SQL Server, we're going to focus on the processes around T-SQL. They break down roughly into two stages: processes that occur in the relational engine, and processes that occur in the storage engine. In the relational engine the query is parsed and then processed by the Query Optimizer, which generates an execution plan. The plan is sent (in a binary format) to the storage engine, which it then uses to retrieve or update the underlying data. The storage engine is where processes such as locking, index maintenance and transactions occur.
1. Query parsing: As the T-SQL arrives, it passes through a process that checks that the T-SQL is written correctly, that it's well formed. This process is known as query parsing. The output of the parser process is a parse tree, or query tree (or even sequence tree). The parse tree represents the logical steps necessary to execute the query that has been requested.
2. The Query Optimizer: The query optimizer is essentially a piece of software that "models" the way in which the database relational engine works. Using the query processor tree and the statistics it has about the data, and applying the model, it works out what it thinks will be the optimal way to execute the query – that is, it generates an execution plan. In other words, the optimizer figures out how best to implement the request represented by the T-SQL query you submitted. It decides if the data can be accessed through indexes, what types of joins to use and much more. The decisions made by the optimizer are based on what it calculates to be the cost of a given execution plan, in terms of the required CPU processing and I/O, and how fast it will execute. Hence, this is known as a cost-based plan.
The optimizer will generate and evaluate many plans (unless there is already a cached plan) and, generally speaking, will choose the lowest-cost plan, i.e. the plan it thinks will execute the query as fast as possible and use the least amount of resources, CPU and I/O. The calculation of the execution speed is the most important calculation, and the optimizer will use a process that is more CPU-intensive if it will return results that much faster. Sometimes, the optimizer will select a less efficient plan if it thinks it will take more time to evaluate many plans than to run a less efficient plan.
If you submit a very simple query – for example, a single table with no indexes and with no aggregates or calculations within the query – then rather than spend time trying to calculate the absolute optimal plan, the optimizer will simply apply a single, trivial plan to these types of queries. If the query is non-trivial, the optimizer will perform a cost-based calculation to select a plan. In order to do this, it relies on statistics that are maintained by SQL Server. Statistics are collected on columns and indexes within the database, and describe the data distribution and the uniqueness, or selectivity, of the data.
The information that makes up statistics is represented by a histogram, a tabulation of counts of the occurrence of a particular value, taken from 200 data points evenly distributed across the data. It's this "data about the data" that provides the information necessary for the optimizer to make its calculations. If statistics exist for a relevant column or index, then the optimizer will use them in its calculations. Statistics, by default, are created and updated automatically within the system for all indexes or for any column used as a predicate, as part of a WHERE clause or JOIN ON clause. Table variables do not ever have statistics generated on them, so they are always assumed by the optimizer to have a single row, regardless of their actual size. Temporary tables do have statistics generated on them and are stored in the same histogram as permanent tables, for use within the optimizer.
The optimizer takes these statistics, along with the query processor tree, and heuristically determines the best plan. This means that it works through a series of plans, testing different types of join, rearranging the join order, trying different indexes, and so on, until it arrives at what it thinks will be the fastest plan. During these calculations, a number is assigned to each of the steps within the plan, representing the optimizer's estimation of the amount of time it thinks that step will take. This shows what is called the estimated cost for that step. The accumulation of costs for each step is the cost for the execution plan itself.
It's important to note that the estimated cost is just that – an estimate. Given an infinite amount of time and complete, up-to-date statistics, the optimizer would find the perfect plan for executing the query. However, it attempts to calculate the best plan it can in the least amount of time possible, and is obviously limited by the quality of the statistics it has available. Therefore these cost estimations are very useful as measures, but may not precisely reflect reality.
Once the optimizer arrives at an execution plan, the actual plan is created and stored in a memory space known as the plan cache, unless an identical plan already exists in the cache (more on this shortly, in the section on Execution Plan Reuse). As the optimizer generates potential plans, it compares them to previously generated plans in the cache. If it finds a match, it will use that plan.
3. Query execution: Once the execution plan is generated, the action switches to the storage engine, where the query is actually executed, according to the plan.
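To see the statistics the optimizer uses, they can be inspected directly; the object and index names below are hypothetical placeholders.

-- Statistics header, density vector and the histogram for one statistics object
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');

-- List the statistics objects that exist on a table
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');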
3. Every subsequent call to the same stored procedure with the same parameters will also get an optimal plan, whereas calls with different parameter values may not always get an optimal plan. Not all execution plans are created equal. Execution plans are optimized based on what they need to do. The SQL Server engine looks at a query and determines the optimal strategy for execution. It looks at what the query is doing, uses the parameter values to look at the statistics, does some calculations and eventually decides on what steps are required to resolve the query. This is a simplified explanation of how an execution plan is created. The important point for us is that the parameters passed are used to determine how SQL Server will process the query. An optimal execution plan for one set of parameters might be an index scan operation, whereas another set of parameters might be better resolved using an index seek operation.
4. Data in a partitioned table is physically stored in groups of rows called partitions, and each partition can be accessed and maintained separately. Partitioning is not visible to end users; a partitioned table behaves like one logical table when queried. An alternative to partitioned tables (for those who don’t have Enterprise Edition) is to create separate tables for each group of rows, union the tables in a view and then query the view instead of the tables. This is called a partitioned view.
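A minimal sketch of such a partitioned view, assuming two hypothetical yearly tables with CHECK constraints on the partitioning column:

-- Each member table constrains its own range of the partitioning column
CREATE TABLE dbo.Orders2023
(
    OrderID   int  NOT NULL PRIMARY KEY,
    OrderDate date NOT NULL CHECK (OrderDate >= '2023-01-01' AND OrderDate < '2024-01-01')
);
CREATE TABLE dbo.Orders2024
(
    OrderID   int  NOT NULL PRIMARY KEY,
    OrderDate date NOT NULL CHECK (OrderDate >= '2024-01-01' AND OrderDate < '2025-01-01')
);
GO

-- Queries against the view that filter on OrderDate only touch the relevant member table
CREATE VIEW dbo.OrdersAllYears
AS
SELECT OrderID, OrderDate FROM dbo.Orders2023
UNION ALL
SELECT OrderID, OrderDate FROM dbo.Orders2024;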