
Parallelism in SQL Server

In this session we will discuss parallelism in SQL Server. We will talk about configuration parameters, parallel execution plans, parallel operators, and more. We will also cover common problems and best practices.

  1. Parallelism in SQL Server. Enrique Catala Bañuls, Mentor, SolidQ. ecatala@solidq.com, Twitter: @enriquecatala
  2. Enrique Catala Bañuls
     • Computer engineer
     • Mentor at SolidQ in the relational engine team
     • Microsoft Technical Ranger
     • Microsoft Active Professional since 2010
     • Microsoft Certified Trainer
  3. Our sponsors:
  4. Volunteers:
     • They spend their FREE time to give you this event (2 months per person).
     • Because they are crazy.
     • Because they want YOU to learn from the BEST IN THE WORLD.
     • If you see someone with "STAFF" on their back, buy them a beer; they deserve it.
  5. Paulo Matos:
  6. Paulo Borges:
  7. João Fialho:
  8. Bruno Basto:
  9. Objectives of this session
     • Basics on parallelism
     • Settings to adjust parallelism
     • Exchange operators
     • Enemies of parallelism
     • Best practices
  10. Parallelism
     • "Parallelism is the action of executing a single task across several CPUs."
     • It enhances performance by taking advantage of modern hardware configurations.
  11. Parallelism benefits
     • SQL Server uses all CPUs by default.
     • Generally, the queries that qualify for parallelism are high-I/O queries.
  12. SMP
     • Symmetric multiprocessing (SMP) system.
     • All CPUs share the same main memory.
     • No hardware partitioning for memory access.
     • Typically used in smaller computers.
     • [Diagram: SMP architecture, with all CPUs attached to main memory over a shared system bus (FSB).]
  13. NUMA
     • Non-Uniform Memory Access.
     • Nodes connected by a shared bus, cross-bar, or ring.
     • Typically used in high-end computers.
     • [Diagram: NUMA architecture, with CPUs grouped into nodes, each node with its own memory and memory controller, and node controllers linked over a shared bus.]
  14. NUMA
     • SQL Server is NUMA aware: it automatically detects the NUMA configuration.
     • It minimizes memory latency by using local memory in each node.
     • SQL Server must be properly configured to get the best performance on NUMA systems.
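     A minimal sketch for checking what SQL Server detected, using the sys.dm_os_nodes DMV (output varies by hardware and version):

       -- One row per SQLOS node; the dedicated admin connection node is filtered out.
       SELECT node_id,
              node_state_desc,
              memory_node_id,
              online_scheduler_count,
              active_worker_count
       FROM sys.dm_os_nodes
       WHERE node_state_desc <> 'ONLINE DAC';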
  15. SQL Server execution model (SQLOS)
     • SQLOS creates a scheduler for each logical CPU.
     • A scheduler is like a logical CPU used by SQL Server.
     • Only one worker can be executed by a scheduler at the same time.
     • The unit of work for a worker is a task.
     • [Diagram: SQLOS hierarchy - memory node, CPU node, scheduler, workers, tasks.]
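     The scheduler-per-logical-CPU mapping can be seen in sys.dm_os_schedulers; a minimal sketch (user work runs on the VISIBLE ONLINE schedulers):

       SELECT scheduler_id,
              cpu_id,
              status,
              current_tasks_count,
              runnable_tasks_count,
              current_workers_count
       FROM sys.dm_os_schedulers
       WHERE status = 'VISIBLE ONLINE';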
  16. Schedulers and concurrency
     • Pre-emptive scheduler (Windows)
       - Windows uses pre-emptive scheduling because of its general-purpose operating system nature.
       - It uses a priority-driven architecture.
       - Each thread executes in a predetermined time slice.
       - A thread can be preempted by a higher-priority thread.
     • Cooperative scheduler (SQL Server)
       - Each task puts itself on the waiting list every time it needs a resource.
       - The same scheduler keeps executing until the task yields.
       - This voluntary yielding by workers prevents context switching and improves performance.
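     The cooperative "waiting list" can be observed directly; a minimal sketch using sys.dm_os_waiting_tasks:

       -- Tasks that are currently suspended, and what they are waiting for.
       SELECT session_id,
              wait_type,
              wait_duration_ms,
              blocking_session_id
       FROM sys.dm_os_waiting_tasks
       WHERE session_id IS NOT NULL       -- ignore internal system tasks
       ORDER BY wait_duration_ms DESC;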
  17. Objectives of this session (agenda recap)
  18. Settings to adjust parallelism (see the configuration sketch below)
     • Hardware level
       - NUMA
     • Instance level
       - Soft-NUMA (affinity mask)
       - Degree of parallelism
       - Cost threshold for parallelism
       - Max worker threads
       - -P parameter
     • Connection level
       - Resource Governor, by configuring MAXDOP
     • Query level
       - MAXDOP clause
       - T-SQL patterns (CROSS APPLY, functions, ...)
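     As a reference for the instance-level settings above, a minimal sp_configure sketch that only displays the current values (they are advanced options, hence the first two statements):

       EXEC sp_configure 'show advanced options', 1;
       RECONFIGURE;

       EXEC sp_configure 'max degree of parallelism';       -- 0 = use all available CPUs
       EXEC sp_configure 'cost threshold for parallelism';  -- default is 5
       EXEC sp_configure 'max worker threads';               -- 0 = sized automatically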
  19. CPU affinity mask
     • Used to set which processor(s) can be used by the SQL Server instance.
     • Setting processor affinity ties SQL Server threads to particular processors.
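     A minimal sketch of pinning the instance to specific CPUs with ALTER SERVER CONFIGURATION (the CPU range is a placeholder; AUTO restores the default behaviour):

       -- Tie SQL Server schedulers to CPUs 0 through 3.
       ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;

       -- Revert to automatic affinity (any CPU).
       ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;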
  20. Affinity I/O mask
     • Used to dedicate CPU usage to I/O operations.
     • Each I/O operation needs to be finalized (byte checksum, number of transferred bytes, page number check, etc.), which consumes CPU.
     • Can be used to place the lazy writer on a new hidden scheduler.
     • [Diagram: bad vs. good affinity configuration.]
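     The I/O affinity mask is exposed through sp_configure as a bit mask; a hedged sketch (the mask value is a placeholder, and this setting is not dynamic, so it takes effect after an instance restart):

       -- Dedicate CPUs 0 and 1 (bits 0 and 1, mask = 3) to I/O completion work.
       EXEC sp_configure 'show advanced options', 1;
       RECONFIGURE;
       EXEC sp_configure 'affinity I/O mask', 3;   -- placeholder mask
       RECONFIGURE;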
  21. Network affinity
     • [Diagram: TCP ports 8000, 8001, 8002, and 8003 affinitized to different nodes.]
  22. Cost threshold for parallelism
     • Instance-level configuration.
     • Statistically changes how much of the workload runs in parallel: it moves the boundary at which a serial plan is considered for promotion to a parallel plan. In pseudocode:

       if (best_plan_for_now.cost < 1)
           return best_plan_for_now
       else if (MAXDOP > 0 and best_plan_for_now.cost > cost_threshold_for_parallelism)
           return the cheaper of create_parallel_plan() and best_plan_for_now
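     A minimal sketch of raising the threshold (50 is just a common starting point, not a recommendation from the deck):

       -- Only plans whose estimated cost exceeds 50 are considered for parallelism.
       EXEC sp_configure 'show advanced options', 1;
       RECONFIGURE;
       EXEC sp_configure 'cost threshold for parallelism', 50;
       RECONFIGURE;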
  23. Demonstration 1: affinity mask, cost threshold for parallelism
  24. Degree of parallelism (DOP) (see the sketch below)
     • Max degree of parallelism
       - Instance setting that affects the whole instance.
       - Can be configured at the Resource Governor workload group level.
       - Enforces the maximum number of CPUs that a single query can use.
     • MAXDOP hint
       - Can be used at the query level.
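     A sketch of the three scopes mentioned above; the values, the workload group name, and dbo.SalesOrders are placeholders:

       -- Instance level (advanced option): cap parallel plans at 4 schedulers.
       EXEC sp_configure 'max degree of parallelism', 4;
       RECONFIGURE;

       -- Workload-group level, via Resource Governor.
       CREATE WORKLOAD GROUP ReportingGroup WITH (MAX_DOP = 2);
       ALTER RESOURCE GOVERNOR RECONFIGURE;

       -- Query level: the MAXDOP hint overrides the instance setting for this statement only.
       SELECT CustomerID, COUNT(*) AS OrderCount
       FROM dbo.SalesOrders
       GROUP BY CustomerID
       OPTION (MAXDOP 8);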
  25. Demonstration 2: MAXDOP
  26. Objectives of this session (agenda recap)
  27. Exchange operators
     • Operators dedicated to moving rows between one or more workers, distributing individual rows among them.
  28. Distribute Streams operator
     • Row distribution based on:
       - Hash: a hash is computed for each row, and each thread works only with the rows that have the same hash value.
       - Round-robin: each row is sent to the next thread in round-robin order.
       - Broadcast: all rows are sent to all threads.
       - Range: each row is sent to a thread based on a range computation over a column; rare, used in some parallel index creation operations.
       - Demand: pull mode; rows are sent to the operator that asks for them; appears on partitioned tables.
  29. Repartition Streams operator
     • Takes rows from multiple sources and sends rows to multiple destinations.
     • Doesn't update any row.
  30. Gather Streams operator
     • Takes rows from multiple sources and sends them to a single destination (thread).
     • Typically increases CXPACKET waits.
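     To see these exchange operators in practice, a hedged sketch: run a query that is likely to qualify for parallelism and capture its actual plan (dbo.SalesOrders is a hypothetical large table; whether the plan actually goes parallel depends on cost and configuration):

       SET STATISTICS XML ON;   -- returns the actual execution plan as XML

       SELECT CustomerID, SUM(TotalDue) AS TotalSales
       FROM dbo.SalesOrders
       GROUP BY CustomerID
       OPTION (MAXDOP 4);       -- allow up to 4 schedulers for this statement

       SET STATISTICS XML OFF;
       -- Look for Distribute Streams, Repartition Streams and Gather Streams in the plan.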
  31. Demonstration 3: exchange operators
  32. Objectives of this session (agenda recap)
  33. Enemies of parallelism (see the scalar UDF sketch below)
     • Things that make the whole plan serial:
       - Modifying the contents of a table variable (reading is fine).
       - Any T-SQL scalar function.
       - CLR scalar functions marked as performing data access (ordinary ones are fine).
       - Some intrinsic functions, including OBJECT_NAME, ENCRYPTBYCERT, and IDENT_CURRENT.
       - System table access (e.g. sys.tables).
     • Things that create serial zones within the plan:
       - TOP.
       - Sequence project (e.g. ROW_NUMBER, RANK).
       - Multi-statement T-SQL table-valued functions.
       - Backward range scans (forward is fine).
       - Global scalar aggregates.
       - Common sub-expression spools.
       - Recursive CTEs.
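     A hedged sketch of the most common case from the list above, the T-SQL scalar function (the function, table, and column names are hypothetical):

       -- Any T-SQL scalar UDF forces the calling query to run serially.
       CREATE FUNCTION dbo.ufn_TaxedTotal (@amount money)
       RETURNS money
       AS
       BEGIN
           RETURN @amount * 1.21;
       END;
       GO

       -- This query cannot get a parallel plan because of the scalar UDF...
       SELECT CustomerID, SUM(dbo.ufn_TaxedTotal(TotalDue)) AS TaxedSales
       FROM dbo.SalesOrders
       GROUP BY CustomerID;

       -- ...while inlining the expression leaves the optimizer free to go parallel.
       SELECT CustomerID, SUM(TotalDue * 1.21) AS TaxedSales
       FROM dbo.SalesOrders
       GROUP BY CustomerID;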
  34. Demonstration 4: enemies of parallelism
  35. CXPACKET
     • [Diagram: serial, parallel, and serial zones within a single execution plan.]
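     A minimal sketch for checking how much CXPACKET shows up in the instance wait statistics:

       SELECT wait_type,
              waiting_tasks_count,
              wait_time_ms,
              signal_wait_time_ms
       FROM sys.dm_os_wait_stats
       WHERE wait_type = 'CXPACKET'   -- newer versions also report CXCONSUMER
       ORDER BY wait_time_ms DESC;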
  36. Demonstration 5: CXPACKET
  37. Objectives of this session (agenda recap)
  38. Best practices
     • Never trust the default configuration for the degree of parallelism (by default, MAXDOP = 0).
     • As a general rule:
       - Pure OLTP should use MAXDOP = 1.
       - MAXDOP should not exceed the number of physical cores.
       - On NUMA architectures, MAXDOP <= number of physical cores per NUMA node.
     • Sample wait statistics:

       wait type             wait time (ms)   requests
       CXPACKET              786556034        128110444
       LATCH_EX              255701441        155553913
       ASYNC_NETWORK_IO      129888217        19083082
       PAGEIOLATCH_SH        83672746         2813207
       WRITELOG              70634742         48398646
       SOS_SCHEDULER_YIELD   47697175         176871743
  39. Best practices (continued)
     • When to apply MAXDOP?
       - ALTER INDEX operations: typically set MAXDOP = number of physical cores (see the sketch below).
     • When to set max degree of parallelism?
       - When you see high CXPACKET waits.
       - Pure OLTP systems should set it to 1.
     • When to set cost threshold for parallelism?
       - When you want to statistically shift, across the whole instance, how many queries end up with parallel plans.
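     A hedged sketch of the ALTER INDEX pattern above (the index and table names are hypothetical, and 8 stands in for the number of physical cores):

       ALTER INDEX IX_SalesOrders_CustomerID
           ON dbo.SalesOrders
           REBUILD WITH (MAXDOP = 8);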
  40. Objectives of this session (agenda recap)
  41. Thank you!
  42. Parallelism in SQL Server. Enrique Catala Bañuls, Mentor, SolidQ. ecatala@solidq.com, Twitter: @enriquecatala
