The PostgreSQL Query Planner
Atri Sharma
What's this talk based on?
● Research done by hackers like Robert Haas in
the past, mailing list readings, discussions
with smart people, etc.
Who's this talk for?
● Users of postgres who want to know how the
query planner's heuristics can affect their
queries.
A little about the PostgreSQL Query
Planner
● Quoting Bruce Momjian:
'The optimizer is the "brain" of the database,
interpreting SQL queries and determining the fastest
method of execution.'
The decision maker
● The fate of your query performance is decided
here.
Where can problems arise in the
query planner?
● Row Count Estimates
● DBA Ideas
● Planner's blocking points (what the planner
can't do at this point in time)
Row Count Stats
● The backend essentially holds MCVs (Most
common values) and a histogram of the
distribution of the rest of the values.
● The number of MCVs stored is governed by a
value known as the statistics target.
● Incorrect or outdated counts can lead to bad
selectivity estimates, and hence to
not-so-good plans.
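● A quick way to inspect what ANALYZE has gathered is the pg_stats
view; a minimal sketch ('orders' and 'customer_id' are hypothetical
names):
SELECT attname, n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'orders' AND attname = 'customer_id';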
User Settings
● PostgreSQL depends quite heavily on the page
cost values (seq_page_cost, random_page_cost)
and others. Setting them to arbitrary values will
cause the query planner to choose an
inefficient plan over an efficient alternative.
Planner Not-Done-Yets
● Some queries execute slowly because their
optimization depends on a feature that is
not yet present in the query planner.
Selectivity Estimate Issues
● Arbitrary default estimates can be badly off.
● Cross-table correlations aren't taken into
consideration.
● The planner doesn't understand expressions
like (a + 0 = a).
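● A minimal sketch of seeing the mis-estimate, assuming a hypothetical
table foo with an integer column a:
EXPLAIN ANALYZE SELECT * FROM foo WHERE foo.a + 0 = foo.a;
-- The predicate is always true, but the planner applies a default
-- selectivity; compare its estimated row count against the actual rows.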
Settings Issues
● DBAs can use some settings to tune the query
planner.
● Those settings can be harmful if set to
incorrect values.
Planner Cost Evaluation Parameters
● seq_page_cost = 1.0
● random_page_cost = 4.0
● cpu_tuple_cost = 0.01
● cpu_index_tuple_cost = 0.005
● cpu_operator_cost = 0.0025
● These parameters are used by the planner to
estimate the costs of plans.
● random_page_cost abstracts the cost of
reading a random page and seq_page_cost
abstracts the same for a sequential read.
● In real-world scenarios, setting
random_page_cost to a lower value can
help get better plans, since modern disks
have reduced the gap between random and
sequential page read costs.
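● A session-level experiment, sketched with a hypothetical orders table:
SET random_page_cost = 1.1;  -- closer to seq_page_cost, as on SSDs
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
RESET random_page_cost;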
effective_cache_size
● Sets the planner's assumption about the
effective size of the disk cache that is
available to a single query.
● A common rule of thumb is to set it to about
75% of RAM.
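● For example (a sketch; the right value depends on your hardware, and
the setting normally lives in postgresql.conf):
SET effective_cache_size = '12GB';  -- e.g. on a hypothetical 16GB server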
Cases
● Some examples of situations where the
planner decisions can lead to a bad plan.
● Some cases are not strictly related to the
query planner, but are fun in general!
WITH
● One of the classic cases.
● WITH is an optimization barrier.
Why does WITH face performance
issues?
● We have to understand how WITH is
processed by the planner.
● The query planner intentionally runs the WITH
part first. It divides the query into separate
plans, which doesn't match up with the pull
architecture of the executor.
Solution
● Use JOINs instead; see the sketch below.
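● A sketch of the rewrite, assuming hypothetical orders and customers
tables:
-- Before: the CTE is planned separately, so the filter cannot
-- interact with the outer join:
WITH recent AS (
  SELECT * FROM orders
  WHERE created_at > now() - interval '1 day'
)
SELECT c.name, r.total
FROM recent r
JOIN customers c ON c.id = r.customer_id;

-- After: as a plain join, the planner can push predicates down and
-- choose the join order freely:
SELECT c.name, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '1 day';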
Feature Alert!
● The current implementation approach for
GROUPING SETS uses a CTE-based
architecture, so it suffers from the same
problem of divided query plans. If this problem
is solved, we could have an efficient
implementation of GROUPING SETS in
postgres, which would be very nice.
Materialized Views
● Joining a materialized view back to its parent
table is expensive, since the planner does not
know about any correlation between the two,
even though the view's contents are derived
from the table.
Solution
● Use intermediate tables to store the
intermediate results.
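● A sketch with hypothetical names:
CREATE TABLE order_summary AS
  SELECT customer_id, count(*) AS order_count
  FROM orders
  GROUP BY customer_id;
ANALYZE order_summary;  -- give the planner real statistics for it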
Aggregates
● A function call like agg(DISTINCT ...) does not
allow the planner to use a HashAggregate
node, leading to slower query plans.
Solution
● Use subqueries instead.
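● A sketch of the rewrite (hypothetical schema):
-- Instead of:
SELECT count(DISTINCT customer_id) FROM orders;

-- a subquery lets the planner consider a HashAggregate for the
-- DISTINCT step:
SELECT count(*)
FROM (SELECT DISTINCT customer_id FROM orders) AS d;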
EXISTS and NOT EXISTS
● Using EXISTS and NOT EXISTS in your
queries is a good idea and will generally lead
to better plans. Use them instead of full
subqueries when you only want to test for the
presence or absence of specific values.
Reason
● EXISTS and NOT EXISTS allow the planner to
use semi-joins and anti-joins, respectively.
These are much more efficient than full joins for
just testing the presence or absence of values.
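● A sketch, again with hypothetical customers and orders tables:
-- Semi-join: customers that have at least one order
SELECT c.name
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

-- Anti-join: customers with no orders
SELECT c.name
FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);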
Prepared Statements
● Prepared statements have long been a source
of query performance problems.
● They perform much better now, though, thanks
to some recent work.
What is bad with prepared
statements?
● Before 9.2, the user had a choice: use prepared
statements and get generic plans, or skip
them and replan every time.
● From 9.2 on, the user no longer faces that
restriction, but the downside is that the user
has no control over whether a prepared
statement is replanned.
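● A sketch, using the sometable example from the Example slide below:
PREPARE fetch_by_ids (int[]) AS
  SELECT * FROM sometable WHERE id = ANY ($1);
EXECUTE fetch_by_ids(ARRAY[1, 2, 3]);
-- Pre-9.2 this always used the generic plan; from 9.2 on the server
-- may replan with the actual parameter, but you cannot force either.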
Pre 9.2
● Prepared statements are good for workloads
where the selectivity of the queries is in a
small range. Widely varying selectivity values
can lead to bad performance for prepared
statements.
Post 9.2
● The thing to watch for post-9.2 is whether the
planner is overlooking a better alternative plan
and choosing a suboptimal one.
Example
● select * from sometable where id = any ($1);
where $1 was a small array.
● The generic plan for this is of course trivial and
would have worked fine.
● But the planner has to estimate the costs of
the generic plan without knowing how many
elements are in the array.
Example continued
● It assumed an array size for the generic plan
that was much higher than the actual size of
the array.
● As a result, the generic plan was costed at
about twice the cost of the custom plan,
even though it was the exact same plan.
Take away lesson
● You need to be wary when using prepared
statements, whether pre-9.2 or post-9.2.
● Prepared statements are much better than
before, but surprises may still turn up.
Join Reordering
● The planner may reorder the joins present in a
query based on some heuristics that it thinks
will lead to better plans.
● Generally, this is a good idea.
● We should not assume that we have a better
idea about the optimal join order.
● Still, if we are inclined to do so, we have a
setting known as join_collapse_limit.
Gotcha!
● We have a catch here, though.
● Setting join_collapse_limit to a lower value can
lead to suboptimal plans, but faster planning.
● If you care more about planner speed than
about plan quality, set join_collapse_limit to a
lower value.
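● For example:
SET join_collapse_limit = 1;  -- joins run in the order written:
                              -- faster planning, possibly worse plans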
Genetic Query Optimizer
● The genetic query optimizer, or GEQO, is
used when the number of tables involved in a
join exceeds a specific limit.
● GEQO is used when searching the entire
search space for executing the join is not
feasible.
Genetic Query Optimizer Continued
● GEQO uses genetic algorithms to generate
query plans and evaluate their fitness,
generating better plans in each successive
population.
GEQO Gotcha
● GEQO is useful for generating plans when the
number of tables is high.
● It is also useful for generating plans quickly.
● However, it can lead to suboptimal plans.
● Lower geqo_threshold if you care more about
planner speed than about the optimality of the
query plan.
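● For example (the default is 12):
SET geqo_threshold = 8;  -- hand larger join problems to GEQO sooner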
Note about join_collapse_limit and
geqo_threshold
● Since the default value of geqo_threshold is
higher than the default value of
join_collapse_limit, GEQO isn't used by default
for explicit JOIN syntax.
Lack of planner heuristics
● Refer to 'Planner Conceptual Error when
Joining a Subquery -- Outer Query Condition
not Pulled Into Subquery' on the
pgsql-performance list.
● It shows that the planner does not understand
that a = b together with (b = c OR b = d) implies
(a = c OR a = d).
● The planner lacks such heuristics. Another example
is SELECT * from foo WHERE (foo.a + 0 =
foo.a);
So, queries like...
● SELECT *
FROM parent
INNER JOIN child ON child.parent_id = parent.id
LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c
FROM parent
INNER JOIN child ON child.parent_id = parent.id
WHERE child.data > 25
GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin
WHERE parent.bin IN (1, 2)
AND agg1.c >= 3;
Should be changed to...
Notice the 'bin = 1' here
● SELECT *
FROM parent
INNER JOIN child ON child.parent_id = parent.id
LEFT JOIN (SELECT parent.id, parent.bin, COUNT(*) AS c
FROM parent
INNER JOIN child ON child.parent_id = parent.id
WHERE child.data > 25
GROUP BY 1, 2) agg1 ON agg1.id = parent.id AND agg1.bin = parent.bin
WHERE parent.bin = 1
AND agg1.c >= 3;
General Advice 1
● Use tables to store intermediate results and
process over them wherever possible. This
avoids pitfalls such as materialized-view
joins, UNION pitfalls, and DISTINCT pitfalls,
among others.
General Advice 2
● Whenever the planner's cost estimates seem
to be drifting from reality, run ANALYZE.
This will update the planner statistics and
may lead to better plans.
General Advice 3
● When the planner's cost estimates don't
seem right, you can consider increasing the
statistics target (the number of MCVs stored)
by using the following command:
ALTER TABLE <table name> ALTER COLUMN
<column name> SET STATISTICS <value>;
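● A worked example with hypothetical names:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;  -- re-gather statistics at the new target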
Future developments to the query
planner
● The query planner still lacks some features.
● Understanding of foreign keys, inner join
removal, more heuristics.
● Hopefully, we will have them all someday.
● CTE planning needs to be improved and
brought in line with the postgres executor's
data-pull model.
Case Study – Adding foreign keys
understanding to planner
● In the context of inner join removal.
● The main issue identified: if a row is deleted,
then between the deletion of the row and the
firing of the FK trigger, any plan that was
generated depending on that row's foreign
key is invalid.
Thank You!
