
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose


The future of database administration will be highly automated. Until then, we live in a world where extensive manual interaction is required from a skilled DBA. This will change soon, as more "autonomous databases" reach maturity and enter production environments.

Postgres-specific monitoring tools and systems continue to improve, detecting and analyzing performance issues and bottlenecks in production databases. However, while these tools can detect current issues, highly experienced DBAs are still required to analyze them and recommend mitigations.

In this session, the speaker will present the initial results of the POSTGRES.AI project – Nancy CLI, a unified way to manage automated database experiments. Nancy CLI is a framework for automated database experiments, built on well-known open-source tools and Postgres modules: pgBadger, pg_stat_kcache, auto_explain, pgreplay, and others.

Originally developed to simulate various SQL query use cases in different environments and to collect data for training ML models, Nancy CLI turned out to be a universal framework that can play a crucial role in CI/CD pipelines in any company.

Using Nancy CLI, DBAs and engineers of any level can easily conduct automated experiments today, either on AWS EC2 Spot Instances or on any other servers. All you need to do is tell Nancy which database to use, specify a workload (synthetic, or "real", generated from the Postgres logs), and state what you want to test: say, check how a new index will affect the most expensive query groups from pg_stat_statements, or compare various values of default_statistics_target. The collected information will show, with a very high level of confidence, how individual queries and overall Postgres performance will be affected when the change is applied to production.



  1. The Art of Database Experiments. Postgres.ai. Nikolay Samokhvalov. Email: ns@postgres.ai, Twitter: @postgresmen
  2. About me: Postgres experience: 13+ years (databases: 17+). Past: co-founder/CTO of 3 startups (30M+ users in total), all based on Postgres. Founder of #RuPostgres (1700+ members on Meetup.com, the 2nd largest Postgres user group globally). Re-launched a consulting practice in the SF Bay Area: PostgreSQL.support. Founder of Postgres.ai, a platform aimed at automating what is not yet automated. Twitter: @postgresmen. Email: ns@postgres.ai
  3. Part 1. Why database experiments?
  4. How do we find SQL performance issues?
  5. prod → monitoring → Engineer
  6. prod → monitoring → Engineer: automated signals (alerts), manual observations
  8. The most common tools for general SQL performance analysis:
  9. ● pg_stat_statements: no query examples, no query plans
  10. ● log analysis (pgBadger): requires maintenance; not "live"; no query plans (unless auto_explain is on); usually not the full picture (log_min_duration_statement >> 0)
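      For illustration (an editor's sketch, not from the slides), the usual way to pull the most expensive query groups from pg_stat_statements; column names match Postgres 9.6/10 (total_time was renamed in later versions), and the database name is a placeholder:

        # top 10 query groups by cumulative execution time
        psql -U postgres -d mydb -c "
          SELECT queryid, calls, total_time, mean_time,
                 left(query, 60) AS query_start
          FROM pg_stat_statements
          ORDER BY total_time DESC
          LIMIT 10;"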
  11. How do we solve the issues we found?
  12. prod → monitoring → Engineer; dev/staging: manual checks
  13. Four ways to improve performance: 1. Tune Postgres configuration 2. Add/remove indexes 3. Rewrite query / redesign DB schema 4. Add more resources (CPU, RAM, disks)
  18. Four ways to improve performance, annotated: 1. Tune Postgres configuration: ~280 knobs! 2. Add/remove indexes: btree, hash, GiST, SP-GiST, GIN, RUM, BRIN, Bloom; unique, partial, functional, covering 3. Rewrite query / redesign DB schema 4. Add more resources (CPU, RAM, disks). Without real-workload, real-data verification (or with very limited verification that affects production), decisions are sub-optimal or even far from optimal. An expert DBA skips many steps… Black magic!
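      As a minimal illustration of way #1 (touching one of those ~280 knobs; this particular knob and value are arbitrary examples, chosen to foreshadow the next slides):

        # change one setting cluster-wide, then reload the configuration
        psql -U postgres -c "ALTER SYSTEM SET default_statistics_target = 1000;"
        psql -U postgres -c "SELECT pg_reload_conf();"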
  24. Real-life examples
  25. A real-life example: default_statistics_target, 100 vs 1000. The idea (from articles/blogs): the default value of default_statistics_target (100) is too low. Let's change it to 1000! ...Sounds good! ...Deployed!
  26. But did it really improve everything? …anything?
  27. Let's check with the Postgres.ai platform!
  28. A real-life example: default_statistics_target, 100 vs 1000. Two years later, we decided to run a DB experiment comparing 100 and 1000:
  29. "before": default_statistics_target = 100; "after": default_statistics_target = 1000
  30. In general, the new value gives better performance. Great!
  31. Scroll down: a couple of query groups…
  32. One query group became much slower after the change was applied! Resolved with "ALTER TABLE/INDEX … ALTER COLUMN … SET STATISTICS …" for the particular table/index and column.
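      The per-column fix from the last slide, spelled out (a sketch: the table, column, database name, and the target value 100 are made up; the slide does not say which value resolved the regression):

        # override the global default_statistics_target for one column only,
        # then refresh statistics so the planner picks up the change
        psql -U postgres -d mydb -c "ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 100;"
        psql -U postgres -d mydb -c "ANALYZE mytable;"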
  33. Change management in other fields
  34. Interfaces (GUI, CLI, API — any): we have automated tests and CI/CD tools
  35. One more example... Aviation, aeronautics, sports cars, etc.: the wind tunnel (images: GFDL and CC-BY-2.5, Wikipedia.org)
  37. Experiments in other fields: 1. are conducted in a special "lab" environment, not prod (call it "staging") 2. come with very detailed, deep analysis 3. are highly automated: multiple experimental runs, faster and less expensive
  38. Part 2. Demo
  39. Demo: Nancy CLI live demonstration (local + AWS). Try it yourself (any feedback is welcome): https://github.com/postgres-ai/nancy
  40. Postgres.ai — GUI: the "Create Experiment" form
  41. Form sections: Environment, Object, Workload, Delta
  42. Part 3. Nancy CLI
  43. What is a Database Experiment? The Input: 1. Environment: hardware, software, OS, FS, Postgres version, configuration 2. Object: some database (e.g. cloned production) 3. Workload: some SQL 4. Delta (optional; multiple values allowed): some Postgres configuration change, or a new index. The Output: 1. Summary: better or worse, in general? 2. Artifacts: any useful "telemetry" data 3. Per-query deep SQL analysis: each query, better or worse? Plus histograms per query group (latency in ms, before vs. after).
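      Mapped onto Nancy, the four inputs roughly become nancy run options. The flags below are indicative only (an editor's sketch from memory of the README, with placeholder paths; verify against nancy run --help):

        # 1. Environment -> --run-on / --aws-ec2-type
        # 2. Object      -> --db-dump
        # 3. Workload    -> --workload-custom-sql
        # 4. Delta       -> --delta-config
        nancy run \
          --run-on aws \
          --aws-ec2-type i3.large \
          --db-dump /path/to/dump.sql \
          --workload-custom-sql /path/to/workload.sql \
          --delta-config "default_statistics_target = 1000"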
  51. Is it possible with existing tools? Yes! ● Docker ● pgreplay ● pg_stat_*** ● auto_explain ● pgBadger (with JSON output) ● AWS EC2 spot instances. All the building blocks already exist!
  52. Nancy CLI: all these blocks (and more!), integrated and automated: https://github.com/postgres-ai/nancy
  53. DIY automated pipeline for DB experiments and optimization. How to automate database optimization using ecosystem tools and AWS? Analyze: ● pg_stat_statements ● auto_explain ● pgBadger to parse logs (use JSON output) ● pg_query to group queries better ● pg_stat_kcache to analyze FS-level ops. Configuration: ● annotated.conf, pgtune, pgconfigurator, postgresqlco.nf ● OtterTune. Suggested indexes (internal "what-if" API w/o actual execution): ● useful: pgHero, POWA, HypoPG, dexter, plantuner. Conduct experiments: ● pgreplay to replay logs (different log_line_prefix; you need to handle it) ● EC2 spot instances. Machine learning: ● MADlib. pgBadger caveats: ● query grouping could be implemented better (see pg_query) ● it lower-cases all queries (hurts "camelCased" names)* ● it doesn't really support plans (auto_explain)*. pgreplay and pgBadger are not friends: they require different log formats. *) Fixed/improved in pgBadger 10.0
  54. Postgres.ai — artificial DBA/DBRE assistants: an AI-based, cloud-friendly platform to automate database administration. Steve: AI-based expert in capacity planning and database tuning. Joe: AI-based expert in query optimization and Postgres indexes. Nancy: AI-based expert in database experiments; conducts experiments and presents results to human and artificial DBAs. Sign up for early access: https://Postgres.ai
  56. Meet Nancy CLI (open source): https://github.com/postgres-ai/nancy ● a custom Docker image (Postgres with extensions & tools) ● nancy prepare-workload to convert Postgres logs (currently .csv only) to a binary workload file ● nancy run to run experiments ● able to run locally (on any machine) or on EC2 spot instances (low price!), including i3.*** instances (with NVMe) ● fully automated management of EC2 spots
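      A sketch of the two commands in sequence; flag names and paths are illustrative, not guaranteed to match the current CLI (check the README):

        # 1) convert a Postgres .csv log into Nancy's binary workload format
        nancy prepare-workload --input /path/to/postgresql.csv --output /path/to/workload.bin
        # 2) replay that workload locally against a dump
        nancy run --run-on localhost --db-dump /path/to/dump.sql --workload-real /path/to/workload.bin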
  57. What's inside the Docker container? Source: https://github.com/postgres-ai/nancy/tree/master/docker Image: https://hub.docker.com/r/postgresmen/postgres-with-stuff/ Inside: ● Ubuntu 16.04 ● Postgres (currently 9.6 or 10) ● postgres_dba (for manual debugging): https://github.com/NikolayS/postgres_dba ● log_min_duration_statement = 0 ● pg_stat_statements enabled, with track_io_timing = on ● auto_explain enabled (all queries, with timing) ● pgreplay ● pgBadger ● pg_stat_kcache (WIP)
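      To inspect the image interactively (a sketch using the image name from the slide; the default tag and entrypoint details are assumptions):

        docker pull postgresmen/postgres-with-stuff
        docker run --rm -it postgresmen/postgres-with-stuff bash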
  58. Nancy CLI – use cases: - deep performance analysis ("clean run") without affecting production - change management - regression testing when upgrading (software, hardware) - verifying config optimization ideas - finding an optimal configuration - … and training ML models for Postgres AI!
  59. Part 4. Real-life challenges* ______________________________ * based on experience in 4 companies with small to mid-sized databases (up to a few TBs) and OLTP workloads (up to 15k TPS)
  60. How to collect the workload? log_min_duration_statement = 0
  61. Fears of log_min_duration_statement = 0. Evaluate the impact of log_min_duration_statement = 0 (quickly and without any changes!): https://gist.github.com/NikolayS/08d9b7b4845371d03e195a8d8df43408 (pay attention to the comments). Only ~300 kB/s and ~800 log entries per second expected.
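      The linked gist estimates this from pg_stat_statements counters; here is a minimal sketch of the same idea (an editor's reconstruction, not the gist itself; it assumes the counters were reset near server start, and the ~100 bytes of per-line log prefix overhead is an assumption):

        # rough estimate of the extra log traffic full query logging would produce
        psql -U postgres -d mydb -c "
          SELECT round(sum(calls)
                   / extract(epoch from now() - pg_postmaster_start_time())) AS entries_per_sec,
                 round(sum(calls) * (100 + avg(length(query)))
                   / extract(epoch from now() - pg_postmaster_start_time())) AS bytes_per_sec
          FROM pg_stat_statements;"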
  62. However…
  63. What is better for intensive logging? log_destination = syslog, logging_collector = off; or log_destination = stderr, logging_collector = off; or log_destination = csvlog, logging_collector = on
  64. Benchmark, Postgres 9.6.10:

pgbench -U postgres -j2 -c24 -T60 -rnf - <<EOF
select;
EOF

  67. All-queries logging with syslog is ● 44x slower compared to "no logging" ● 33x slower compared to stderr / logging collector. Be careful with syslog / journald!
  68. Advice for log_min_duration_statement = 0: ● forecast the expected IOPS and write bandwidth ● analyze your logging system (syslog/journald slows everything down; use stderr and the logging collector) ● if the forecasted impact is too high (say, dozens of MBs), consider sampling ("SET log_min_duration_statement = 0;" in particular, random sessions)
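      One possible implementation of the "random sessions" sampling (an editor's sketch, not from the slides; note that log_min_duration_statement is a superuser-only setting, so the session must run with sufficient privileges):

        # run once per new session, e.g. from the application's connect hook
        psql -U postgres -d mydb -c "
          DO \$\$
          BEGIN
            IF random() < 0.01 THEN  -- sample ~1% of sessions
              PERFORM set_config('log_min_duration_statement', '0', false);
            END IF;
          END \$\$;"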
  69. Nancy real-world examples: shared_buffers. postila.ru, real workload (5 min), 61 GB RAM (EC2 i3.2xlarge), DB size ~500 GB; various shared_buffers values tried, e.g. shared_buffers = 15GB (25% of RAM). Going from 25% to higher values (~80%) improves SQL performance by ~50%.
  70. Nancy real-world examples: seq_page_cost. GitLab.com: random_page_cost = 1.5 and seq_page_cost = 1; a decision was made to switch to seq_page_cost = 4. DB experiments with Nancy CLI were run to check whether this decision was good in terms of performance. Results: https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4835#note_106669373 – in general, SQL performance improved by ~40%. This is still WIP; why it is so remains an open question.
  71. Nancy real-world examples: educate yourself. PostgreSQL documentation, "19.5. Write Ahead Log": https://www.postgresql.org/docs/current/static/runtime-config-wal.html
  72. Just conduct a DB experiment with Nancy CLI, use --keep-alive 3600, and compare!
  73. Thank you! Nikolay Samokhvalov, ns@postgres.ai, twitter: @postgresmen. Give it a try: https://github.com/postgres-ai/nancy
  74. Part 5. Future development
  75. Various ways to create an experimental database: ● plain-text pg_dump: restoration is very slow (only 1 vCPU utilized); "logical" – physical structure is lost (cannot experiment with bloat, etc.); small (if compressed); "snapshot" only ● pg_dump with -Fd ("directory") or -Fc ("custom"): restoration is faster (multiple vCPUs, -j option); "logical" (again: bloat and physical layout are "lost"); small (compressed); "snapshot" only ● pg_basebackup + WALs, point-in-time recovery (PITR), possibly with help from WAL-E, WAL-G, pgBackRest: less reliable, there are sometimes issues (especially if 3rd-party tools are involved – e.g. WAL-E & WAL-G don't support tablespaces, there are occasional bugs, etc.); "physical": bloat and physical structure are preserved; not small – roughly the size of the DB; can "walk in time" (PITR); requires a warm-up procedure (data is not in memory!) ● AWS RDS: create a replica + promote it: no Spots :-/; Lazy Load is tricky (it looks like the DB is there, but it's very slow – warm-up is needed) ● Snapshots ● Ideas for serialization: stop Postgres / rsync "back" or re-copy locally on NVMe / start Postgres
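      For the -Fd variant above, the parallel dump/restore pair looks like this (database names, paths, and job counts are placeholders):

        # "logical" copy with parallelism: directory-format dump, then parallel restore
        pg_dump -Fd -j 8 -f /tmp/mydb.dump mydb
        createdb mydb_experiment
        pg_restore -j 8 -d mydb_experiment /tmp/mydb.dump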
  76. How can we speed up experimental runs? ● prepare the EC2 instance(s) in advance and keep them ● prepare only the EBS volume(s) (perhaps using an instance of a different type) and keep them ready; when they are attached to a new instance, do a warm-up ● resource re-use: reuse the Docker container; reuse the EC2 instance; serialize experimental runs (DDL do/undo; VACUUM FULL; cleanup) ● partial database snapshots (dump/restore only the needed tables)
  77. The future development of Nancy CLI: ● speed up DB creation ● ZFS/XFS snapshots to revert PGDATA state within seconds ● support GCP ● more artifacts delivered: pg_stat_kcache, etc. ● nancy see-report to print the summary + top-30 queries ● nancy compare-reports to print the "diff" for 2+ reports (the summary + numbers for the top-30 queries, ordered by total time based on the 1st report) ● Postgres 11 ● pgbench -i for database initialization ● pgbench to generate multithreaded synthetic workload ● workload analysis: automatically detect "N+1 SELECT" patterns when running a workload ● better support for the serialization of experimental runs ● better support for multiple runs: https://github.com/postgres-ai/nancy/pull/97 (interval with step; gradient descent) ● cost estimation (time + money) ● rewrite in Python or Go. Feedback/contributions welcome: https://github.com/postgres-ai/nancy
  78. Challenge: security issues. Problem: a developer doesn't have access to production, but Nancy works with production data and workload. What about permissions and data protection?
  79. Possible solutions: ● Option 1: allow using Nancy CLI only for those who already have access to production (DBAs, SREs, DBREs) ● Option 2: obfuscate data when preparing a DB clone (no universal solution yet, TODO) ● Option 3: allow access only via the GUI, hide/obfuscate parameters (TODO)
  80. Challenge: reliable results. Issues: 1. a single run is not enough (fluctuations): runs must be repeated 2. "before"/"after" runs on 2 different machines / EC2 nodes are not a fair comparison (defective hardware, throttling). Solutions (ideally, a combination of them): ● sequential runs ● 4+ iterations of each experimental run ● a "baseline benchmark": https://github.com/postgres-ai/nancy/issues/94
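      The "4+ iterations" idea as a shell loop (an editor's sketch; flag names and paths are illustrative, not guaranteed to match the current CLI):

        # repeat the same experiment several times to smooth out fluctuations
        for i in 1 2 3 4; do
          nancy run \
            --run-on localhost \
            --db-dump /path/to/dump.sql \
            --workload-custom-sql /path/to/workload.sql \
            --artifacts-destination "results/run_$i"
        done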
