Future database administration will be highly automated. Until then, we still live in a world where extensive manual interactions are required from a skilled DBA. This will change soon as more "autonomous databases" reach maturity and enter the production environment.
Postgres-specific monitoring tools and systems continue to improve, detecting and analyzing performance issues and bottlenecks in production databases. However, while these tools can detect current issues, they require highly-experienced DBAs to analyze and recommend mitigations.
In this session, the speaker will present the initial results of the POSTGRES.AI project – Nancy CLI, a unified way to manage automated database experiments. Nancy CLI is an automated database management framework based on well-known open-source projects and incorporating major open-source tools and Postgres modules: pgBadger, pg_stat_kcache, auto_explain, pgreplay, and others.
Originally developed with the goal of simulating various SQL query use cases in various environments and collecting data to train ML models, Nancy CLI turned out to be a truly universal framework that can play a crucial role in CI/CD pipelines in any company.
Using Nancy CLI, casual DBAs and any engineers can easily conduct automated experiments today, either on AWS EC2 Spot instances or on any other servers. All you need is to tell Nancy which database to use, specify the workload (synthetic or "real", generated from the Postgres logs), and what you want to test – say, check how a new index will affect the most expensive query groups from pg_stat_statements, or compare various values of "default_statistics_target". All the collected information will give you a high-confidence understanding of how various queries, and overall Postgres performance, will be affected when you apply this change to production.
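For illustration, here is roughly what such an experiment "delta" looks like at the SQL level; the table, column, and index names are hypothetical, and this is only a sketch of the kind of change an experiment applies, not Nancy's own syntax:

-- Delta A: a candidate index (hypothetical names)
CREATE INDEX CONCURRENTLY index_orders_on_customer_id
    ON orders (customer_id);

-- Delta B: a configuration change to evaluate
ALTER SYSTEM SET default_statistics_target = 1000;
SELECT pg_reload_conf();

-- Afterwards, per-query timings from pg_stat_statements are compared
-- between the "before" and "after" runs.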
Postgres Vision 2018: WAL: Everything You Want to KnowEDB
The Write-Ahead Log (WAL) in PostgreSQL is a central feature of the database and it's relied upon to achieve critical functions, like backup, replication, and others. In this presentation delivered at Postgres Vision 2018, Devrim Gündüz, Principal Systems Engineer at EnterpriseDB, explains WAL and what the average database administrator needs to know.
PostgreSQL is designed to be easily extensible. For this reason, extensions loaded into the database can function just like built-in features. In this session, we will learn more about the PostgreSQL extension framework and how extensions are built, look at some popular extensions, and discuss managing these extensions in your deployments.
PostgreSQL, performance for queries with groupingAlexey Bashtanov
The talk will cover PostgreSQL grouping and aggregation facilities and best practices for using them in a fast and efficient manner.
In 40 minutes, the audience will learn several techniques for optimising queries containing the GROUP BY, DISTINCT or DISTINCT ON keywords; one such technique is sketched below.
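As a taste of such techniques, here is one well-known trick, the "loose index scan" emulation described on the PostgreSQL wiki, for speeding up DISTINCT on a low-cardinality indexed column; the events/device_id schema is hypothetical:

-- Naive version scans the whole index:
--   SELECT DISTINCT device_id FROM events;
-- The recursive CTE below jumps from one distinct value to the next:
WITH RECURSIVE t AS (
    (SELECT device_id FROM events ORDER BY device_id LIMIT 1)
    UNION ALL
    SELECT (SELECT e.device_id
            FROM events e
            WHERE e.device_id > t.device_id
            ORDER BY e.device_id
            LIMIT 1)
    FROM t
    WHERE t.device_id IS NOT NULL
)
SELECT device_id FROM t WHERE device_id IS NOT NULL;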
This ppt was used by Devrim at pgDay Asia 2017. He talked about some important facts about WAL, the transaction log (xlog) in PostgreSQL. Some of these can really come in handy on a bad day.
PostgreSQL 10 added built-in logical replication, which tackles some of the limitations of physical replication and opens up promising new areas of replication. In this webinar, we will introduce the concept of logical replication and demonstrate how one can configure it in no time (a minimal SQL sketch follows the highlights below).
Highlights include:
- Basic architecture, including the publisher and subscriber model
- Configuration, administration, and monitoring
- Limitations and future plans
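For reference, the minimal publisher/subscriber setup looks like this (standard PostgreSQL 10+ syntax; host, database, and table names are made up):

-- On the publisher (requires wal_level = logical in postgresql.conf):
CREATE PUBLICATION my_pub FOR TABLE orders, customers;

-- On the subscriber:
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=pub.example.com dbname=shop user=repl'
    PUBLICATION my_pub;

-- Monitoring:
SELECT * FROM pg_stat_subscription;   -- on the subscriber
SELECT * FROM pg_stat_replication;    -- on the publisher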
In 40 minutes, the audience will learn a variety of ways to make a PostgreSQL database suddenly go out of memory on a box with half a terabyte of RAM.
Developers' and DBAs' best practices for preventing this will also be discussed, as well as a bit of Postgres and Linux memory-management internals.
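A quick back-of-the-envelope illustration of why this happens: work_mem is a per-operation limit, not a per-server one, so a setting that looks harmless multiplies out (the numbers below are purely illustrative):

-- work_mem is allocated per sort/hash node, per backend:
SET work_mem = '512MB';
SHOW max_connections;  -- say, 500

-- Worst case for one query with, say, 4 sort/hash nodes:
--   4 * 512MB = 2GB per backend.
-- Across 500 backends that is up to ~1TB of requests --
-- far more than half a terabyte of RAM, before shared_buffers.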
Common Java wisdom is to use PreparedStatements and Batch DML in order to achieve top performance.
It turns out one cannot just blindly follow the best practices. In order to get high throughput, you need
to understand the specifics of the database in question, and the content of the data.
In the talk we will see how proper usage of the PostgreSQL protocol enables high-performance operation while fetching
and storing data. We will see how trivial application and/or JDBC driver code changes can result
in dramatic performance improvements. We will examine how server-side prepared statements should be activated,
and discuss the pitfalls of using server-prepared statements.
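At the SQL level, a server-prepared statement is just PREPARE/EXECUTE (pgJDBC switches to server-prepared statements after a configurable number of executions, the prepareThreshold connection parameter). The pitfall the talk refers to is the planner's switch to a cached generic plan after repeated executions; statement and table names below are hypothetical:

PREPARE get_user (int) AS
    SELECT * FROM users WHERE id = $1;

EXECUTE get_user(42);

-- After the first few executions the planner may cache a generic plan,
-- which can be much worse for skewed data. Since PostgreSQL 12 the
-- behavior can be pinned explicitly:
SET plan_cache_mode = force_custom_plan;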
Webinar slides: An Introduction to Performance Monitoring for PostgreSQL (Severalnines)
To operate PostgreSQL efficiently, you need to have insight into database performance and make sure it is at optimal levels.
With that in mind, we dive into monitoring PostgreSQL for performance in this webinar replay.
PostgreSQL offers many metrics through various status overviews and commands, but which ones really matter to you? How do you trend and alert on them? What is the meaning behind the metrics? And what are some of the most common causes for performance problems in production?
We discuss this and more in ordinary, plain DBA language. We also have a look at some of the tools available for PostgreSQL monitoring and trending; and we’ll show you how to leverage ClusterControl’s PostgreSQL metrics, dashboards, custom alerting and other features to track and optimize the performance of your system.
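As a flavor of the built-in metrics discussed here, two standard starting points (pg_stat_statements must be installed as an extension; note that its total_time column was renamed total_exec_time in PostgreSQL 13):

-- Cache hit ratio per database:
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
           AS cache_hit_pct
FROM pg_stat_database;

-- Most expensive query groups:
SELECT calls, total_time, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;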
AGENDA
- PostgreSQL architecture overview
- Performance problems in production
- Common causes
- Key PostgreSQL metrics and their meaning
- Tuning for performance
- Performance monitoring tools
- Impact of monitoring on performance
- How to use ClusterControl to identify performance issues
- Demo
SPEAKER
Sebastian Insausti, Support Engineer at Severalnines, has loved technology since his childhood, when he took his first computer course (Windows 3.11); from that moment on, he knew what his profession would be. He has since built up experience with MySQL, PostgreSQL, HAProxy, WAF (ModSecurity), Linux (RedHat, CentOS, OL, Ubuntu server), monitoring (Nagios), networking, and virtualization (VMWare, Proxmox, Hyper-V, RHEV).
Prior to joining Severalnines, Sebastian worked as a consultant to state companies on security, database replication, and high-availability scenarios. He's also a speaker and has given a few local talks on InnoDB Cluster and MySQL Enterprise together with an Oracle team. Before that, he worked for a Mexican company as head of the sysadmin department, as well as for a local ISP (Internet Service Provider), where he managed customers' servers and connectivity.
This webinar builds upon a related blog post by Sebastian: https://severalnines.com/blog/performance-cheat-sheet-postgresql.
PostgreSQL (or Postgres) began its life in 1986 as POSTGRES, a research project of the University of California at Berkeley.
PostgreSQL isn't just relational, it's object-relational. This gives it some advantages over other open-source SQL databases like MySQL, MariaDB and Firebird.
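A small example of what "object-relational" means in practice, closely following the composite-type example in the PostgreSQL documentation:

-- A user-defined composite type used as a column type:
CREATE TYPE inventory_item AS (name text, supplier_id int, price numeric);

CREATE TABLE on_hand (item inventory_item, count int);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);

-- Field access uses parentheses around the column name:
SELECT (item).name, (item).price FROM on_hand;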
ClickHouse Monitoring 101: What to monitor and how (Altinity Ltd)
Webinar. Presented by Robert Hodges and Ned McClain, April 1, 2020
You are about to deploy ClickHouse into production. Congratulations! But what about monitoring? In this webinar we will introduce how to track the health of individual ClickHouse nodes as well as clusters. We'll describe available monitoring data, how to collect and store measurements, and graphical display using Grafana. We'll demo techniques and share sample Grafana dashboards that you can use for your own clusters.
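For reference, ClickHouse exposes most of this health data through SQL-queryable system tables:

-- Current gauge values (connections, running queries, ...):
SELECT metric, value FROM system.metrics;

-- Cumulative event counters since server start:
SELECT event, value FROM system.events ORDER BY value DESC LIMIT 10;

-- Slow-to-compute metrics refreshed in the background:
SELECT metric, value FROM system.asynchronous_metrics LIMIT 10;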
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
A look at what HA is and what PostgreSQL has to offer for building an open source HA solution. Covers various aspects in terms of Recovery Point Objective and Recovery Time Objective. Includes backup and restore, PITR (point in time recovery) and streaming replication concepts.
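Two SQL-level building blocks that come up in this context (both are standard PostgreSQL functions):

-- Create a named WAL location to recover to with PITR
-- (used via recovery_target_name):
SELECT pg_create_restore_point('before_big_migration');

-- On a streaming-replication standby, estimate replay lag:
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;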
Introduction to Vacuum Freezing and XID (PGConf APAC)
These are the slides used by Masahiko Sawada of NTT, Japan for his presentation at pgDay Asia. He spoke about the internals of VACUUM and the XID wraparound issue in PostgreSQL.
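The standard way to watch for XID wraparound risk, for reference:

-- How close is each database to wraparound? autovacuum launches
-- aggressive "anti-wraparound" freezing when the age approaches
-- autovacuum_freeze_max_age (200 million by default):
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;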
Nancy CLI, a unified way to manage automated database experiments. Nancy CLI is an automated database management framework based on well-known open-source projects and incorporating major open-source tools.
Using these tools, casual DBAs can conduct automated experiments today, either on AWS EC2 Spot instances or on any other servers. All you need is to tell Nancy which database to use, how to determine workloads and what you want to verify – say, check how some index will help, or compare various values of "default_statistics_target" for your database and your workload.
Everything else Nancy will do for you, in fully automated fashion, in the end presenting you detailed results for comparison.
UKOUG version of a presentation trying to establish the sensible limits of parallelism on a couple of hardware configurations. Detailed white paper is at http://oracledoug.com/px_slaves.pdf
Building a Complex, Real-Time Data Management Application (Jonathan Katz)
Congratulations: you've been selected to build an application that will manage whether or not the rooms for PGConf.EU are being occupied by a session!
On the surface, this sounds simple, but we will be managing the rooms of PGConf.EU, so we know that a lot of people will be accessing the system. Therefore, we need to ensure that the system can handle all of the eager users that will be flooding the PGConf.EU website checking to see what availability each of the PGConf.EU rooms has.
To do this, we will explore the following PostgreSQL features:
* Data types and their functionality, such as:
* Date/Time types
* Ranges
* Indexes, such as:
* GiST
* SP-GiST
* Common Table Expressions and Recursion
* Set generating functions and LATERAL queries
* Functions and PL/pgSQL
* Triggers
* Logical decoding and streaming
We will be writing our application primarily with SQL, though we will sneak in a little bit of Python and use Kafka to demonstrate the power of logical decoding.
At the end of the presentation, we will have a working application, and you will be happy knowing that you provided a wonderful user experience for all PGConf.EU attendees, made possible by the innovation of PostgreSQL!
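To make one of these features concrete, here is a sketch (assumed schema) of how range types plus a GiST exclusion constraint prevent double-booking a room, which is the heart of such an application:

CREATE EXTENSION IF NOT EXISTS btree_gist;  -- lets GiST handle the int column

CREATE TABLE room_sessions (
    room_id int NOT NULL,
    during  tstzrange NOT NULL,
    -- No two rows may share a room with overlapping time ranges:
    EXCLUDE USING gist (room_id WITH =, during WITH &&)
);

INSERT INTO room_sessions VALUES (1, '[2018-10-23 10:00, 2018-10-23 11:00)');
-- The next insert fails with a conflict: the room is already taken.
INSERT INTO room_sessions VALUES (1, '[2018-10-23 10:30, 2018-10-23 11:30)');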
Planning For High Performance Web Application (Yue Tian)
This slide deck was prepared for Beijing Open Party (a monthly unconference in Beijing, China). It covers some important points about building scalable web sites. A few pages of the deck are in Chinese.
Performance Tuning Cheat Sheet for MongoDB (Severalnines)
Bart Oles - Severalnines AB
Database performance affects organizational performance, and we tend to look for quick fixes when under stress. But how can we better understand our database workload and factors that may cause harm to it? What are the limitations in MongoDB that could potentially impact cluster performance?
In this talk, we will show you how to identify the factors that limit database performance. We will start with the free MongoDB Cloud monitoring tools. Then we will move on to log files and queries. To be able to achieve optimal use of hardware resources, we will take a look into kernel optimization and other crucial OS settings. Finally, we will look into how to examine performance of MongoDB replication.
Monitoring Big Data Systems Done "The Simple Way" - Demi Ben-Ari - Codemotion... (Codemotion)
Once you start working with Big Data systems, you discover a whole bunch of problems you won't find in monolithic systems. Monitoring all of the components becomes a big data problem itself. In the talk, we'll mention all of the aspects that you should take into consideration when monitoring a distributed system using tools like Web Services, Spark, Cassandra, MongoDB, and AWS. Beyond the tools, what should you monitor about the actual data that flows through the system? We'll cover the simplest solution using your day-to-day open-source tools; the surprising thing is that it comes not from an Ops guy.
Monitoring Big Data Systems "Done the simple way" - Demi Ben-Ari - Codemotion...Demi Ben-Ari
Once you start working with distributed Big Data systems, you start discovering a whole bunch of problems you won't find in monolithic systems.
All of a sudden, monitoring all of the components becomes a big data problem in itself.
In the talk we'll mention all of the aspects that you should take into consideration when monitoring a distributed system once you're using tools like:
Web Services, Apache Spark, Cassandra, MongoDB, Amazon Web Services.
Beyond the tools, what should you monitor about the actual data that flows through the system?
We'll cover the simplest solution using your day-to-day open-source tools; the surprising thing is that it comes not from an Ops guy.
Equnix Business Solutions (Equnix) is an IT solution provider in Indonesia, delivering comprehensive solution services, especially on the infrastructure side, for corporate business needs, based on research and Open Source. Equnix has three main services, known as the Trilogy of Services: support (maintenance/managed), world-class software development, and expert consulting and assessment for high-performance transaction systems. Equnix is customer-oriented, not product- or principal-oriented. Equal opportunity based on merit is our credo in managing HR development.
Determining the root cause of performance issues is a critical task for Operations. In this webinar, we'll show you the tools and techniques for diagnosing and tuning the performance of your MongoDB deployment. Whether you're running into problems or just want to optimize your performance, these skills will be useful.
Drilling Cyber Security Data With Apache Drill (Charles Givre)
This deck walks you through using Apache Drill and Apache Superset (Incubating) to explore cyber security datasets including PCAP, HTTPD log files, Syslog and more.
Similar to The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose (20)
Experiments with Postgres in Docker and clouds: optimization of settings and the schema of yo... (Nikolay Samokhvalov)
In the future, database administration will be fully automated. This is already the case for basic DBA operations: spinning up instances, backups, replication management, failover – we can see it in the rapid growth of managed cloud DBMS offerings (AWS RDS, Google Cloud SQL, and dozens of smaller players), in the work on Kubernetes operators for Postgres and MySQL at a number of companies, and in the adoption of internal RDS-like DBaaS (database-as-a-service) solutions inside large organizations.
But the diagnostics and optimization of database performance are still very much "manual" today. For example, in Postgres: we find a slow query group in pg_stat_statements, look for a concrete example (or even make one up on the fly), try EXPLAIN ANALYZE first in a dev/staging environment, which usually has much less data, and then on production... We pick an index, convince ourselves that it (seemingly) speeds up one SQL query and – that's it, off to production. This "quick hack, straight to production" method should become a thing of the past, just as manual deployment and configuration of servers and services already have.
Nancy CLI (https://github.com/postgres-ai/nancy) is an open framework for conducting experiments on PostgreSQL databases, allowing any engineer to establish a systematic approach to analyzing and optimizing database performance. Nancy supports running experiments locally (on any server) and remotely on inexpensive, high-performance AWS EC2 spot instances.
Without any special knowledge, using Nancy CLI, any engineer can now:
- collect detailed information about the behavior of "production SQL queries" on a "production clone", without touching production, in order to find bottlenecks (enabling extensive diagnostics on production under load is unwise, and sometimes impossible);
- check how a particular index affects SQL performance (including how much it slows down UPDATEs);
- pick optimal Postgres configuration parameters (for example: run a check of 100 values of default_statistics_target in the cloud, with a detailed study of the effect and analysis for each group of SQL queries);
- compare 2+ runs of a modeled workload on a clone of a real database under different conditions (different hardware, different Postgres versions, different settings, different sets of indexes).
In the talk, we will also discuss concrete examples of introducing this method of automated database experiments and Nancy CLI into projects at several companies (databases up to 2 TB, hybrid workloads, up to 15k TPS), and the difficulties we had to overcome along the way:
1. Enabling full query logging: when is it merely a fear, and when is it real, serious stress for the server? What if the disks cannot keep up with full logging?
2. Security questions: should all developers be given access to the experimental nodes, or can we do without that? Should the data be obfuscated?
3. How do we make sure that experiment results are reliable?
4. How do we run experiments on a terabyte-scale database quickly?
5. Should Nancy be included in the CI/CD pipeline?
An industrial approach to PostgreSQL tuning: database experiments (Nikolay Samokhvalov)
shared_buffers = 25% – is that a lot or a little? Or just right? How do you tell whether this rather outdated recommendation fits your particular case?
It is time to approach the selection of postgresql.conf parameters "like grown-ups": not with blind "auto-tuners" or outdated advice from articles and blogs, but based on:
rigorously verified database experiments, performed automatically, in large numbers, and under conditions as close to "production" as possible,
a deep understanding of the inner workings of the DBMS and the OS.
Using Nancy CLI (https://gitlab.com/postgres.ai/nancy), we will walk through a concrete example – the notorious shared_buffers – in different situations and different projects, and try to figure out how to choose the optimal setting for our infrastructure, database, and workload.
https://pgconf.ru/2019/242809
#RuPostgresLive 4: how to write and read complex SQL queries (Nikolay Samokhvalov)
Online polls consistently show that all of us are keenly interested in two things: a) how to write the most efficient SQL queries; b) how to "read" such queries – more precisely, how to understand what exactly the DBMS does, or will do, when executing them.
These two inseparably linked topics are extremely broad; the art of SQL can (and should) be studied for years. During our next live session we will touch on some aspects of both.
PART 1: EXPLAIN
Alexey Ermakov. How to read and interpret the output of the EXPLAIN command
The EXPLAIN command is the main query-analysis tool: it lets you understand how a query will be executed and how it can be sped up. For complex queries its output can be quite bulky and hard to read. I will describe which parts a query plan consists of, which "markers" in it deserve attention first, and how to react to them.
PART 2: ADVANCED SQL
Nikolay Samokhvalov. Modern and "advanced" SQL
"I'm not a wizard, I'm only learning." Prominent gurus such as Markus Winand and Maxim Boguk keep teaching us advanced SQL. Recursive CTEs, LATERAL JOIN, virtuoso work with arrays and strings, window functions and other fashionable tricks that will help you train your Postgres – I will try to give a good overview, and if the topic turns out to be interesting, we will definitely invite the real gurus to the next group Postgres sessions :)
Database First! On common mistakes in using relational DBMS (Nikolay Samokhvalov)
We will discuss several fundamental situations in RDBMS usage (each of which the author has encountered repeatedly), examining possible mistakes along the way:
- elementary data modification;
- working with dates, times, and time zones;
- checking integrity constraints;
- job queues;
- batch operations on data (for example, deleting a batch of rows from a table);
- full-text search;
- relatively new tasks (building APIs, machine learning).
#RuPostgres at Yandex, episode 3. What's new in PostgreSQL 9.6 (Nikolay Samokhvalov)
The first release candidate of version 9.6 came out on September 1, which means the full release is just around the corner. Everyone around has already discussed the new features, and by now it is embarrassing to know nothing about things such as parallel query execution, pushdown for FDW, wait/lock monitoring, phrase full-text search, or the magical \gexec in psql. So that nobody has to blush, we will quickly walk through all the main and most interesting points of version 9.6.
A lightning talk about Postgres in Russia and #PostgreSQLRussia by Nikolay Samokhvalov; includes 2016 CfPs inviting speakers to the PgDay.ru, PgConf.ru and Highload.co conferences.
Vladimir Borodin: How to sleep soundly - 2015.10.14 PostgreSQLRussia.org meetu... (Nikolay Samokhvalov)
More: http://PostgreSQLRussia.org
The talk is devoted to PostgreSQL backups. We will discuss how data is stored on disk and how WAL is organized in PostgreSQL, and which tools exist for backing up and restoring data. We will also discuss how to stop worrying about your data and why PostgreSQL is famous for its reliability.
#PostgreSQLRussia 2015.09.15 - Nikolay Samokhvalov - 5 main features of Po... (Nikolay Samokhvalov)
Community meetups: http://PostgreSQLRussia.org -
Migration from Oracle to Postgres. A meetup hosted by CUSTIS.
Agenda:
19:00 Welcome pizza, informal networking.
19:20 Introduction. About CUSTIS.
19:25 Nikolay Samokhvalov. PostgreSQL in brief.
19:35 Maxim Tregubov, CUSTIS. Data migration from Oracle to Postgres. A talk about how we tested a migration from Oracle to Postgres for one of our customers. We will cover the choice of data migration tool, the setup of the test environment, and the results obtained. We will also touch on the trendy topic of DevOps and show the role of Ansible in data migration.
20:10 Vyacheslav Muravlev, CUSTIS. A Data Access Layer as insurance for DBMS migration. For many systems, migrating from one DBMS to another is akin to a "total loss" insurance event: the lion's share of the code has to be rewritten. You can insure against such damage with the Data Access Layer (DAL) design pattern. We will describe how this approach helped us carry out the first stage of migrating one customer's system from Oracle to PostgreSQL, review the tooling, and discuss the applicability of the approach at the enterprise level.
20:30 Ivan Kukharchuk, YANDEX. How to save on licenses and reduce the load on Oracle by moving reports to PostgreSQL.
20:50 Closing, informal networking.
Three challenges to relational DBMS and the new PostgreSQL - #PostgreSQLRussia seminar... (Nikolay Samokhvalov)
The relational model will soon turn half a century old – an enormous span for any technology industry, let alone IT. Over the years, this model has faced quite a few challenges, and they have had a considerable influence on the evolution of relational DBMS. The talk discusses the three main challenges to the relational model, including NoSQL. Drawing on many years of experience using PostgreSQL to build social networks serving multi-million audiences, it clearly demonstrates how this DBMS responded to the challenges as they arose. It also covers the "three pillars" of PostgreSQL that keep this system from turning into a monster and allow it to gain the functionality needed for building modern, highly loaded projects. Special attention is paid to the new data types, JSON and JSONB: their capabilities, indexing options, and an analysis of their current shortcomings.
2014.12.23 Nikolay Samokhvalov, Once again about JSON(b) in PostgreSQL 9.4 (Nikolay Samokhvalov)
The JSONB data type is arguably the brightest new feature of PostgreSQL 9.4, released on December 18, 2014.
Quite a few talks and articles have already been devoted to this data type, to working with it, and to indexing it. But as a rule, the information in them is overloaded with PostgreSQL-specific terms.
Confused about data models? About which indexes can help speed up your work with the DBMS?
This talk helps the picture fall into place. It is aimed at those who started using PostgreSQL quite recently or are only planning to work with it. It covers PostgreSQL's place in the modern DBMS world, the struggle of different data models for their place in the sun on this market, and how this has shaped the evolution of Postgres.
Among other things, it explains what kinds of trees exist, how they help speed up databases, and why PostgreSQL is simply a paradise forest for trees of all kinds :)
See also the video: http://postgresmen.ru/meetup/2014-12-23-parallels
A talk from Parallels:
Performance-testing methodologies for database-centric applications
Description: When working on complex products with database-centric applications, changes in the code, and even more so in the SQL queries to the database, can lead to unexpected performance drops, or to application performance degrading as the database grows. It is therefore important to be able to catch and fix the causes of such degradations as quickly as possible.
The talk describes how performance monitoring is organized for Parallels Automation, a product for automating hosting and cloud services, for which database performance is the determining factor.
The company will show how it analyzes SQL query execution plans inside PostgreSQL, how it checks how quickly and efficiently SQL queries work overall, and how it determines the strategy for further optimization.
* data access techniques;
* an application-level database class on top of PDO, and PDO specifics;
* wiring up connection pools;
* a stored-procedure API;
* working with distributed storage;
* RPC between databases, using asynchronous geocoding as an example.
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Enhancing Performance with Globus and the Science DMZ (Globus)
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Generative AI Deep Dive: Advancing from Proof of Concept to Production (Aggregage)
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... (Jen Stirrup)
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf (Peter Spielvogel)
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients' needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Welcome to the first live UiPath Community Day Dubai! Join us for this unique occasion to meet our local and global UiPath Community and leaders. You will get a full view of the MEA region's automation landscape and the AI-powered automation technology capabilities of UiPath. Also, hosted by our local partner Marc Ellis, you will enjoy a half day packed with industry insights and networking with automation peers.
📕 Curious on our agenda? Wait no more!
10:00 Welcome note - UiPath Community in Dubai
Lovely Sinha, UiPath Community Chapter Leader, UiPath MVPx3, Hyper-automation Consultant, First Abu Dhabi Bank
10:20 A UiPath cross-region MEA overview
Ashraf El Zarka, VP and Managing Director MEA, UiPath
10:35: Customer Success Journey
Deepthi Deepak, Head of Intelligent Automation CoE, First Abu Dhabi Bank
11:15 The UiPath approach to GenAI with our three principles: improve accuracy, supercharge productivity, and automate more
Boris Krumrey, Global VP, Automation Innovation, UiPath
12:15 Discover how Marc Ellis leverages tech-driven solutions in recruitment and managed services.
Brendan Lingam, Director of Sales and Business Development, Marc Ellis
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
1. The Art of Database
Experiments
Postgres.ai
Nikolay Samokhvalov
email: ns@postgres.ai
twitter: @postgresmen
2. About me:
Postgres experience: 13+ years (databases: 17+)
Past: Co-founder/CTO of 3 startups (total 30M+ users), all based on Postgres
Founder of #RuPostgres (1700+ members on Meetup.com, the 2nd largest globally)
Re-launched consulting practice in the SF Bay Area PostgreSQL.support
Founder of Postgres.ai – a platform aimed to automate what is not yet automated
Twitter: @postgresmen
Email: ns@postgres.ai
10. The most common tools for general SQL performance analysis (prod monitoring; the engineer gets automated signals/alerts and makes manual observations):
● pg_stat_statements
○ no query examples
○ no query plans
● log analysis (pgBadger)
○ requires maintenance
○ not “live”
○ no query plans (unless auto_explain is ON)
○ usually, not a full picture
(log_min_duration_statement >> 0)
21. Four ways to improve performance:
1. Tune Postgres configuration (~280 knobs!)
2. Add/remove indexes (btree, hash, GiST, SP-GiST, GIN, RUM, BRIN, Bloom; unique, partial, functional, covering)
3. Rewrite query / redesign DB schema
4. Add more resources (CPU, RAM, disks)
No real-workload and real-data verification (or very limited and affecting production)
22. Four ways to improve performance:
1. Tune Postgres configuration (~280 knobs!)
2. Add/remove indexes (btree, hash, GiST, SP-GiST, GIN, RUM, BRIN, Bloom; unique, partial, functional, covering)
3. Rewrite query / redesign DB schema
4. Add more resources (CPU, RAM, disks)
No real-workload and real-data verification (or very limited and affecting production) → sub-optimal or even far from optimal decisions
23. Four ways to improve performance:
1. Tune Postgres configuration (~280 knobs!)
2. Add/remove indexes (btree, hash, GiST, SP-GiST, GIN, RUM, BRIN, Bloom; unique, partial, functional, covering)
3. Rewrite query / redesign DB schema
4. Add more resources (CPU, RAM, disks)
No real-workload and real-data verification (or very limited and affecting production) → sub-optimal or even far from optimal decisions. Expert DBA skips many steps… Black magic!
25. A real-life example. default_statistics_target: 100 vs 1000
The idea (from articles/blogs):
Default default_statistics_target (100) is too low.
Let’s change it to 1000!
...Sounds good!
...Deployed!
26. But did it really improve everything? …anything?
27. But did it really improve everything? …anything?
Let’s check with Postgres.ai platform!
28. A real-life example. default_statistics_target: 100 vs 1000. Two years later, we decided to run a DB experiment to compare 100 and 1000:
29. A real-life example. default_statistics_target: 100 vs 1000
“before”:
default_statistics_target = 100
“after”:
default_statistics_target = 1000
30. A real-life example. default_statistics_target: 100 vs 1000
“before”:
default_statistics_target = 100
In general, the new value gives
better performance. Great!
“after”:
default_statistics_target = 1000
31. A real-life example. default_statistics_target: 100 vs 1000
“before”:
default_statistics_target = 100
Scroll down, a couple of query groups…
“after”:
default_statistics_target = 1000
32. A real-life example. default_statistics_target: 100 vs 1000
“before”:
default_statistics_target = 100
“after”:
default_statistics_target = 1000
One query group became much slower
after the change was applied!
resolved with:
“ALTER TABLE/INDEX … ALTER COLUMN SET
STATISTICS …“
for a particular table/index and column
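For reference, that per-column fix looks like this (table and column names are hypothetical); it raises the statistics target only where it helps, instead of globally:

-- Override the global default_statistics_target for one column only:
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
-- Re-collect statistics so the planner sees the change:
ANALYZE orders;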
36. GFDL and CC-BY-2.5, Wikipedia.org
One more example...
Aviation, aeronautics, sports cars, etc.:
wind tunnel
37. Experiments in other fields:
1. Are conducted in a special “lab” environment, not prod
(call it “staging”)
2. With very detailed, deep analysis
3. Highly automated. Multiple experimental runs, faster, less expensive
43. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
44. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
45. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
3. Workload
some SQL
46. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
3. Workload
some SQL
4. Delta (optional; multiple values allowed)
some Postgres configuration change, or
new index
47. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
3. Workload
some SQL
4. Delta (optional; multiple values allowed)
some Postgres configuration change, or
new index
The Output:
1. Summary
better or worse, in general?
48. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
3. Workload
some SQL
4. Delta (optional; multiple values allowed)
some Postgres configuration change, or
new index
The Output:
1. Summary
better or worse, in general?
2. Artifacts
any useful “telemetry” data
49. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
3. Workload
some SQL
4. Delta (optional; multiple values allowed)
some Postgres configuration change, or
new index
The Output:
1. Summary
better or worse, in general?
2. Artifacts
any useful “telemetry” data
3. Per-query deep SQL analysis
each query: better or worse?
50. What is a Database Experiment?
The Input:
1. Environment
hardware, software, OS, FS,
Postgres version, configuration
2. Object
some database (e.g. cloned production)
3. Workload
some SQL
4. Delta (optional; multiple values allowed)
some Postgres configuration change, or
new index
The Output:
1. Summary
better or worse, in general?
2. Artifacts
any useful “telemetry” data
3. Per-query deep SQL analysis
each query: better or worse?
+ histograms: query duration (ms) per query group, before vs. after
51. Is it possible with existing tools? Yes!
● Docker
● pgreplay
● pg_stat_***
● auto_explain
● pgBadger (with JSON output)
● AWS EC2 spot instances
All the blocks exist already!
52. Is it possible with existing tools? Yes!
● Docker
● pgreplay
● pg_stat_***
● auto_explain
● pgBadger (with JSON output)
● AWS EC2 spot instances
All the blocks exist already!
Nancy CLI: all these blocks (and more!), integrated and automated
https://github.com/postgres-ai/nancy
53. DIY automated pipeline for DB experiments and optimization
How to automate database optimization using ecosystem tools and AWS?
Analyze:
● pg_stat_statements
● auto_explain
● pgBadger to parse logs, use JSON output
● pg_query to group queries better
● pg_stat_kcache to analyze FS-level ops
Configuration:
● annotated.conf, pgtune, pgconfigurator, postgresqlco.nf
● ottertune
Suggested indexes (internal “what-if” API w/o actual execution)
● (useful: pgHero, POWA, HypoPG, dexter, plantuner)
Conduct experiments:
● pgreplay to replay logs (different log_line_prefix, you need to handle it)
● EC2 spot instances
Machine learning
● MADlib
pgBadger:
● Grouping queries can be implemented better (see pg_query)
● Makes all queries lower cased (hurts "camelCased" names)*
● Doesn’t really support plans (auto_explain)*
pgreplay and pgBadger are not friends,
require different log formats
*)
Fixed/improved in pgBadger 10.0
54. Postgres.ai — artificial DBA/DBRE assistants
AI-based cloud-friendly platform to automate database administration
54
Steve
AI-based expert in capacity planning and
database tuning
Joe
AI-based expert in query optimization and
Postgres indexes
Nancy
AI-based expert in database experiments.
Conducts experiments and presents
results to human and artificial DBAs
Sign up for early access:
https://Postgres.ai
55. Postgres.ai — artificial DBA/DBRE assistants
AI-based cloud-friendly platform to automate database administration
55
Steve
AI-based expert in capacity planning and
database tuning
Joe
AI-based expert in query optimization and
Postgres indexes
Nancy
AI-based expert in database experiments.
Conducts experiments and presents
results to human and artificial DBAs
Sign up for early access:
https://Postgres.ai
56. Meet Nancy CLI (open source)
Nancy CLI https://github.com/postgres-ai/nancy
● custom docker image (Postgres with extensions & tools)
● nancy prepare-workload to convert Postgres logs (currently only .csv)
to a binary workload file
● nancy run to run experiments
● able to run locally (on any machine) or in an EC2 spot instance (low price!),
including i3.*** instances (with NVMe)
● fully automated management of EC2 spots
58. Nancy CLI – Use Cases
- Deep performance analysis (“clean run”), not affecting production
- Change management
- Regression testing when upgrading (software, hardware)
- Verify config optimization ideas
- Find optimal configuration
- … and train ML models for Postgres AI!
59. Part 4. Real life challenges*
______________________________
*
based on experience in 4 companies
with small to mid-sized databases (up to a few TB)
and OLTP workloads (up to 15k TPS)
61. Fears of log_min_duration_statement = 0
Evaluate the impact of log_min_duration_statement = 0 (quickly and w/o any changes!):
https://gist.github.com/NikolayS/08d9b7b4845371d03e195a8d8df43408 (pay attention to comments)
Only ~300 kB/s expected,
~800 log entries per second
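The linked gist estimates the expected logging rate from pg_stat_statements before any change is made; the idea boils down to something like this (a sketch, assuming pg_stat_statements is installed):

-- Statements per second since the last stats reset -- a proxy for how
-- many log lines log_min_duration_statement = 0 would produce:
SELECT round(sum(s.calls) /
             extract(epoch FROM now() - d.stats_reset)) AS stmts_per_sec
FROM pg_stat_statements s
JOIN pg_stat_database d ON d.datid = s.dbid
WHERE d.datname = current_database()
GROUP BY d.stats_reset;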
63. log_destination = syslog
logging_collector = off
or
log_destination = stderr
logging_collector = off
or
log_destination = csvlog
logging_collector = on
What is better
for intensive logging?
67. # Postgres 9.6.10
pgbench -U postgres -j2 -c24 -T60 -rnf - <<EOF
select;
EOF
All-queries logging with syslog is
● 44x slower compared to “no logging”
● 33x slower compared to stderr /
logging collector
Be careful with syslog / journald
68. Advice for log_min_duration_statement = 0:
● Forecast expected IOPS and write bandwidth
● Analyze your logging system (syslog/journald slows everything down; use
STDERR and the logging collector)
● If the forecasted impact is too high (say, dozens of MBs), consider sampling
(apply "SET log_min_duration_statement = 0;" only in particular, random sessions)
69. Nancy real-world examples: shared_buffers
postila.ru,
Real workload (5 min),
61 GB RAM (ec2 i3.2xlarge),
DB size: ~500GB
Various shared_buffers
values
shared_buffers = 15GB (25%)
If we go from 25% to higher values (~80%), we improve SQL performance by ~50%
70. Nancy real-world examples: seq_page_cost
GitLab.com: random_page_cost = 1.5 and seq_page_cost = 1
A decision was made to switch to seq_page_cost = 4.
DB experiments with Nancy CLI were run to check whether this decision was good in
terms of performance.
Results:
https://gitlab.com/gitlab-com/gl-infra/infrastructure/issues/4835#note_106669373
– in general, SQL performance improved by ~40%.
This is still WIP; why this is so remains an open question.
72. Nancy real-world examples: educate yourself
PostgreSQL Documentation “19.5. Write Ahead Log”
https://www.postgresql.org/docs/current/static/runtime-config-wal.html
Just conduct DB experiment with Nancy CLI,
use --keep-alive 3600 and compare!
75. Various ways to create an experimental database
● plain text pg_dump
○ restoration is very slow (1 vcpu utilized)
○ “logical” – physical structure is lost (cannot experiment with bloat, etc)
○ small (if compressed)
○ “snapshot” only
● pg_dump with either -Fd (“directory”) or -Fc (“custom”):
○ restoration is faster (multiple vCPUs, -j option)
○ “logical” (again: bloat, physical layout is “lost”)
○ small (because compressed)
○ “snapshot” only
● pg_basebackup + WALs, point-in-time recovery (PITR), possibly with help from WAL-E, WAL-G, pgBackRest
○ less reliable, sometimes there are issues (especially if 3rd-party tools are involved – e.g. WAL-E & WAL-G don’t support
tablespaces, and there are occasional bugs)
○ “physical”: bloat and physical structure is preserved
○ not small – ~ size of the DB
○ can “walk in time” (PITR)
○ requires a warm-up procedure (the data is not in memory!) – see the pg_prewarm sketch after this list
● AWS RDS: create a replica + promote it
○ no Spots :-/
○ Lazy Load is tricky (it looks like the DB is there but it’s very slow – warm-up is needed)
● Snapshots
● Ideas for serialization
○ Stop Postgres / rsync “back” or re-copy locally on NVMe / start Postgres
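The warm-up mentioned above can be done with the standard pg_prewarm extension (the table and index names here are hypothetical):

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Load a relation's blocks into shared buffers (default 'buffer' mode):
SELECT pg_prewarm('orders');
SELECT pg_prewarm('orders_pkey');  -- indexes need warming too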
76. How can we speed up experimental runs?
● Prepare the EC2 instance(s) in advance and keep it
● Prepare EBS volume(s) only (perhaps, using an instance of the different
type) and keep it ready. When attached to the new instance, do warm-up
● Resource re-usage:
○ reuse docker container
○ reuse EC2 instance
○ serialize experimental runs (DDL Do/Undo; VACUUM FULL; cleanup)
● Partial database snapshots (dump/restore only needed tables)
77. The future development of Nancy CLI
● Speedup DB creation
● ZFS/XFS snapshots to revert PGDATA state within seconds
● Support GCP
● More artifacts delivered: pg_stat_kcache, etc
● nancy see-report to print the summary + top-30 queries
● nancy compare-reports to print the “diff” for 2+ reports (the summary + numbers for top-30 queries,
ordered by total time based on the 1st report)
● Postgres 11
● pgbench -i for database initialization
● pgbench to generate multithreaded synthetic workload
● Workload analysis: automatically detect “N+1 SELECT” when running workload
● Better support for the serialization of experimental runs
● Better support for multiple runs https://github.com/postgres-ai/nancy/pull/97
○ interval with step
○ gradient descent
● Provide costs estimation (time + money)
● Rewrite in Python or Go
Feedback/contributions welcome
https://github.com/postgres-ai/nancy
78. Challenge: security issues
Problem: a developer doesn’t have access to production.
Nancy works with production data/workload.
What about permissions and data protection?
79. Challenge: security issues
Problem: a developer doesn’t have access to production.
Nancy works with production data/workload.
What about permissions and data protection?
Possible solutions:
● Option 1: allow using Nancy CLI only to those who already have access to
production (DBAs, SREs, DBREs)
● Option 2: obfuscate data when preparing a DB clone (no universal
solution yet, TODO)
● Option 3: allow access only to GUI, hide/obfuscate parameters (TODO)
80. Challenge: reliable results
Issues:
1. A single run is not enough (fluctuations) – runs must be repeated
2. “Before”/“after” runs on 2 different machines / EC2 nodes – a “not fair” comparison
(defective hardware, throttling)
Solutions (ideally: combination of them):
● Sequential runs
● 4+ iterations of each experimental run
● “Baseline benchmark” https://github.com/postgres-ai/nancy/issues/94