The Query Optimizer is the “brain” of your Postgres database. It interprets SQL queries and determines the fastest method of execution. Using the EXPLAIN command, this presentation shows how the optimizer interprets queries and determines optimal execution.
This presentation will give you a better understanding of how Postgres optimally executes your queries and what steps you can take to understand and perhaps improve its behavior in your environment.
To listen to the webinar recording, please visit EnterpriseDB.com > Resources > Ondemand Webcasts
If you have any questions please email sales@enterprisedb.com
About Flexible Indexing
Postgres’ rich variety of data structures and data-type specific indexes can be confusing for newer and experienced Postgres users alike who may be unsure when and how to use them. For example, gin indexing specializes in the rapid lookup of keys with many duplicates — an area where traditional btree indexes perform poorly. This is particularly useful for json and full text searching. GiST allows for efficient indexing of two-dimensional values and range types.
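For example, a hedged sketch of both index types (tables and queries invented for illustration):

-- GIN index for containment queries on a jsonb column
CREATE TABLE events (id serial PRIMARY KEY, payload jsonb);
CREATE INDEX events_payload_gin ON events USING gin (payload);
SELECT * FROM events WHERE payload @> '{"type": "login"}';

-- GiST index for range types, e.g. finding overlapping reservations
CREATE TABLE reservations (room int, during tsrange);
CREATE INDEX reservations_during_gist ON reservations USING gist (during);
SELECT * FROM reservations WHERE during && tsrange('2015-01-01', '2015-01-02');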
To listen to the recorded presentation with Bruce Momjian, visit Enterprisedb.com > Resources > Webcasts > Ondemand Webcasts.
For product information and subscriptions, please email sales@enterprisedb.com.
The optimizer is the "brain" of the database, interpreting SQL queries and determining the fastest method of execution. In this talk, Bruce Momjian, Senior Database Architect at EnterpriseDB and co-founder of the PostgreSQL Global Development Group, uses the EXPLAIN command to show how the optimizer interprets queries and determines optimal execution. The talk is aimed at assisting developers and administrators in understanding how Postgres optimally executes their queries and what steps they can take to understand and perhaps improve its behavior.
Robert Haas
Why does my query need a plan? Sequential scan vs. index scan. Join strategies. Join reordering. Joins you can't reorder. Join removal. Aggregates and DISTINCT. Using EXPLAIN. Row count and cost estimation. Things the query planner doesn't understand. Other ways the planner can fail. Parameters you can tune. Things that are nearly always slow. Redesigning your schema. Upcoming features and future work.
Spencer Christensen
There are many aspects to managing an RDBMS. Some of these are handled by an experienced DBA, but there are a good many things that any sys admin should be able to take care of if they know what to look for.
This presentation will cover basics of managing Postgres, including creating database clusters, overview of configuration, and logging. We will also look at tools to help monitor Postgres and keep an eye on what is going on. Some of the tools we will review are:
* pgtop
* pg_top
* pgfouine
* check_postgres.pl
Check_postgres.pl is a great tool that can plug into your Nagios or Cacti monitoring systems, giving you even better visibility into your databases.
The PostgreSQL High-Performance Cheat Sheets contain quick methods for finding performance issues.
A summary of the course, so that when problems arise you can easily uncover the performance bottlenecks.
Have you ever wondered about the internal structure of indexes in Postgres? What's the difference between a btree and a gin index? How are indexes searched? This talk covers these questions.
PostgreSQL, performance for queries with grouping (Alexey Bashtanov)
The talk will cover PostgreSQL grouping and aggregation facilities and best practices for using them in a fast and efficient manner.
In 40 minutes the audience will learn several techniques to optimise queries containing GROUP BY, DISTINCT or DISTINCT ON keywords.
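As a taste of one such technique, a hedged sketch of DISTINCT ON (schema invented), which can replace a heavier GROUP BY-plus-self-join pattern when you need one row per group:

-- latest reading per sensor: one index-friendly pass instead of a
-- GROUP BY ... MAX() subquery joined back to the table
SELECT DISTINCT ON (sensor_id) sensor_id, recorded_at, value
FROM readings
ORDER BY sensor_id, recorded_at DESC;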
The evolution of Apache Calcite and its Community (Julian Hyde)
Apache Calcite is an open source framework for building databases, and includes a SQL parser, relational algebra, and a highly extensible query optimizer.
It has achieved wide adoption, used in many commercial products, open source projects, and as a test bed for computer science research.
But there is a bootstrap problem: If software is written by a community of contributors, and each contributor acts in their own self-interest, how do you get the first working version of the product? The answer is in the story of how the technology evolved, and how the community evolved with it, and in this talk we tell that story.
This talk is about advanced indexing in PostgreSQL. It guides you through basic concepts as well as advanced techniques to speed up the database.
All important PostgreSQL index types explained: btree, gin, gist, sp-gist and hash.
Regular expression indexes and LIKE queries are also covered.
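For example, a hedged sketch of two ways Postgres can index LIKE and regular-expression searches (table and column invented):

-- text_pattern_ops lets a btree serve left-anchored LIKE 'abc%' queries
CREATE INDEX users_name_pattern ON users (name text_pattern_ops);

-- a pg_trgm GIN index can serve unanchored LIKE '%abc%' and regex matches
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX users_name_trgm ON users USING gin (name gin_trgm_ops);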
Design and develop with performance in mind
Establish a tuning environment
Index wisely
Reduce parsing
Take advantage of Cost Based Optimizer
Avoid accidental table scans
Optimize necessary table scans
Optimize joins
Use array processing
Consider PL/SQL for “tricky” SQL
Spark SQL: Another 16x Faster After Tungsten: Spark Summit East talk by Brad ... (Spark Summit)
Apache Spark 2.1.0 boosted the performance of Apache Spark SQL due to Project Tungsten software improvements. A further 16x speedup has been achieved by using Oracle’s innovations for Apache Spark SQL. This 16x improvement is made possible by Oracle’s Software in Silicon accelerator offload technologies.
Apache Spark SQL in-memory performance is becoming more important due to many factors. Users are now performing more advanced SQL processing on multi-terabyte workloads. In addition, on-prem and cloud servers are getting larger physical memory, enabling these huge workloads to be stored in memory. In this talk we will look at using Spark SQL for feature creation and feature generation within pipelines for Spark ML.
This presentation will explore workloads at scale and with complex interactions. We also provide best practices and tuning suggestions to support these kinds of workloads on real applications in cloud deployments. In addition, ideas for the next-generation Tungsten project will be discussed.
SQL offers many powerful techniques for analyzing your data out of the box, and it is also extensible if you are still missing something. This is now much easier in 18c with polymorphic table functions (PTFs). As an evolution of table functions, a PTF is invoked in the FROM clause and can encapsulate custom processing of the input data: the input row type does not have to be known at design time, and the output row type may first be determined by the actual PTF invocation parameters. This session will give you an introduction based on simple examples. Discover how you can develop your own flexible and self-describing extensions while focusing on business logic and leaving complex things like parallel execution to the database.
Postgres-XC as a Key Value Store Compared To MongoDB (Mason Sharp)
This presentation discusses how Postgres-XC can be used as a PostgreSQL-based key-value store using features like hstore and JSON. It also compares performance to MongoDB for a read workload.
An overview of Postgres-XC is provided. Postgres-XC is a free, open source, PostgreSQL-based write-scalable cluster. It runs on multiple servers and is fully ACID and consistent. Postgres-XC is a traditional relational alternative for cases where NoSQL solutions are being considered.
This presentation was given in San Francisco August 7th by Mason Sharp, one of the original architects of Postgres-XC and co-founder of StormDB (http://www.stormdb.com).
How does the query planner in PostgreSQL work? Index access methods, join execution types, aggregation and pipelining. Optimizing queries with WHERE conditions, ORDER BY and GROUP BY. Composite indexes, partial and expression indexes. Exploiting assumptions about data, and denormalization.
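To make those index variants concrete, a brief hedged sketch (the orders and customers tables are invented for illustration):

-- composite index: serves WHERE customer_id = ? ORDER BY created_at
CREATE INDEX orders_cust_date ON orders (customer_id, created_at);

-- partial index: covers only the rows the hot queries actually touch
CREATE INDEX orders_open ON orders (created_at) WHERE status = 'open';

-- expression index: matches queries filtering on lower(email)
CREATE INDEX customers_email_lower ON customers (lower(email));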
Modern SQL in Open Source and Commercial Databases (Markus Winand)
SQL has gone out of fashion lately—partly due to the NoSQL movement, but mostly because SQL is often still used like 20 years ago. As a matter of fact, the SQL standard continued to evolve during the past decades resulting in the current release of 2016. In this session, we will go through the most important additions since the widely known SQL-92. We will cover common table expressions and window functions in detail and have a very short look at the temporal features of SQL:2011 and row pattern matching from SQL:2016.
Links:
http://modern-sql.com/
http://winand.at/
http://sql-performance-explained.com/
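As a flavor of the post-SQL-92 features mentioned above, a small hedged sketch combining a common table expression with a window function (the sales table is invented):

WITH monthly (month, region, total) AS (
  SELECT date_trunc('month', sold_at), region, SUM(amount)
  FROM sales
  GROUP BY 1, 2
)
SELECT month, region, total,
       RANK() OVER (PARTITION BY month ORDER BY total DESC) AS region_rank
FROM monthly;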
Explaining the Postgres Query Optimizer - PGCon 2014 (EDB)
The optimizer is the "brain" of the database, interpreting SQL queries and determining the fastest method of execution.
This talk uses the EXPLAIN command to show how the optimizer interprets queries and determines optimal execution. Examples include scan methods, index selection, join types, and how ANALYZE statistics influence their selection. The talk will assist developers and administrators in understanding how Postgres optimally executes their queries and what steps they can take to understand and perhaps improve its behavior.
Correctness and Performance of Apache Spark SQL with Bogdan Ghit and Nicolas ... (Databricks)
In this talk, we present a comprehensive framework we developed at Databricks for assessing the correctness, stability, and performance of our Spark SQL engine. Apache Spark is one of the most actively developed open source projects, with more than 1200 contributors from all over the world. At this scale and pace of development, mistakes are bound to happen. To automatically identify correctness issues and performance regressions, we have built a testing pipeline that consists of two complementary stages: randomized testing and benchmarking.
Randomized query testing aims at extending the coverage of the typical unit testing suites, while we use micro and application-like benchmarks to measure new features and make sure existing ones do not regress. We will discuss the various approaches we take, including random query generation, random data generation, random fault injection, and longevity stress tests. We will demonstrate the effectiveness of the framework by highlighting several correctness issues we have found through random query generation and critical performance regressions we were able to diagnose within hours thanks to our automated benchmarking tools.
Fast and Reliable Apache Spark SQL Engine (Databricks)
Building the next-generation Spark SQL engine at speed poses new challenges to both automation and testing. At Databricks, we are implementing a new testing framework for assessing the quality and performance of new developments as they are produced. With more than 1,200 worldwide contributors, Apache Spark follows a rapid pace of development. At this scale, new testing tooling such as random query and data generation, fault injection, longevity stress, and scalability tests is essential to guarantee a reliable and performant Spark release in production. We will demonstrate the effectiveness of our testing infrastructure by drilling down into cases where correctness and performance regressions were found early, and show how they were root-caused and fixed to prevent regressions in production, boosting the continuous delivery of new features.
Before migrating from 10g to 11g or 12c, take the following considerations into account. It is not as simple as just switching the database engine; considerations at the application level are also needed.
SQL Performance Tuning and New Features in Oracle 19c (RachelBarker26)
Covers what's new in Oracle 19c (and CMiC R12) and the reporting software Jaspersoft Studios. If you are not interested in Jasper, skip ahead to page 26. Explains how to read an execution plan and what to look for in an optimized execution plan.
Managing Statistics for Optimal Query Performance (Karen Morton)
Half the battle of writing good SQL is in understanding how the Oracle query optimizer analyzes your code and applies statistics in order to derive the “best” execution plan. The other half of the battle is successfully applying that knowledge to the databases that you manage. The optimizer uses statistics as input to develop query execution plans, and so these statistics are the foundation of good plans. If the statistics supplied aren’t representative of your actual data, you can expect bad plans. However, if the statistics are representative of your data, then the optimizer will probably choose an optimal plan.
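In Oracle, representative statistics are typically gathered with the DBMS_STATS package; a minimal hedged sketch (schema and table names are invented):

-- refresh optimizer statistics for one table, including histograms
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APP',
    tabname    => 'ORDERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/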
Cost-Based Optimizer in Apache Spark 2.2 (Databricks)
Apache Spark 2.2 ships with a state-of-art cost-based optimization framework that collects and leverages a variety of per-column data statistics (e.g., cardinality, number of distinct values, NULL values, max/min, avg/max length, etc.) to improve the quality of query execution plans. Leveraging these reliable statistics helps Spark to make better decisions in picking the most optimal query plan. Examples of these optimizations include selecting the correct build side in a hash-join, choosing the right join type (broadcast hash-join vs. shuffled hash-join) or adjusting a multi-way join order, among others. In this talk, we’ll take a deep dive into Spark’s cost based optimizer and discuss how we collect/store these statistics, the query optimizations it enables, and its performance impact on TPC-DS benchmark queries.
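Spark's cost-based optimizer only has these statistics if you collect them; a minimal Spark SQL sketch (table and column names are placeholders):

-- table-level statistics (row count, size in bytes)
ANALYZE TABLE sales COMPUTE STATISTICS;
-- per-column statistics (distinct values, min/max, NULL counts)
ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS customer_id, amount;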
In this first of a series of presentations, we'll give an overview of the differences between SQL and PL/SQL, and the first steps in optimization, such as understanding RULE vs. COST and how to cut response time by 90% in data extractions running in SQL*Plus.
Cloud Migration Paths: Kubernetes, IaaS, or DBaaS (EDB)
Moving to the cloud is hard, and moving Postgres databases to the cloud is even harder. Public cloud or private cloud? Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Kubernetes for the application, or for the database and the application? This talk will juxtapose self-managed Kubernetes and container-based database solutions, Postgres deployments on IaaS, and Postgres DBaaS solutions of which EDB’s DBaaS BigAnimal is the latest example.
The 10 Best PostgreSQL Replication Strategies for Your Enterprise (EDB)
This webinar helps you understand the differences between the various replication approaches, recognize the requirements of each strategy, and become clear about what can be achieved with each one. With this, you will hopefully be better able to determine which kinds of PostgreSQL replication you really need for your system.
- How physical and logical replication work in PostgreSQL
- Differences between synchronous and asynchronous replication
- Advantages, disadvantages, and challenges of multi-master replication
- Which replication strategy is better suited for different use cases
Speaker:
Borys Neselovskyi, Regional Sales Engineer DACH, EDB
When looking for alternatives to Oracle in the cloud, making the switch can seem like hard work. We understand that migration involves more than just the database. Compatibility is a key point, especially when you consider the resources you may have already invested in Oracle, such as Oracle-specific application code. This webinar will explore the options and the main considerations when moving from Oracle databases to the cloud.
- Detailed review of the database offerings available in the cloud
- Critical factors to consider when choosing the most suitable cloud offering
- How EDB's PostgreSQL experience can help you in your decision
- Demonstration of EDB's BigAnimal
Presenter:
Sergio Romera, Senior Sales Engineer EMEA, EDB
Databases like PostgreSQL cannot run on Kubernetes. That is the refrain we hear over and over, and at the same time it is our motivation at EDB to tear down this wall once and for all.
In this webinar we will talk about our journey so far to bring PostgreSQL to Kubernetes. Find out why we believe that benchmarking the storage and the database before going into production leads to a healthier, longer life for a DBMS, even on Kubernetes.
We will share our process and the results obtained so far, and unveil our plans for the future with Cloud Native PostgreSQL.
The Variations of PostgreSQL Replication (EDB)
Physical replication, logical replication, synchronous, asynchronous, multi-master, horizontal scalability, etc. There are many terms associated with database replication. In this talk we will review the fundamental concepts behind each variation of PostgreSQL replication and in which cases it is best to use one or the other. The presentation includes a practical part with demonstrations, although it will not be a tutorial on how to configure a cluster. The focus is on understanding each variation in order to choose the best one depending on the use case.
What you will learn:
- How physical replication works in PostgreSQL
- How logical replication works in PostgreSQL
- Differences between synchronous and asynchronous replication
- What multi-master replication is
NoSQL and Spatial Database Capabilities using PostgreSQL (EDB)
PostgreSQL is an object-relational database system. NoSQL on the other hand is a non-relational database and is document-oriented. Learn how the PostgreSQL database gives one the flexible options to combine NoSQL workloads with the relational query power by offering JSON data types. With PostgreSQL, new capabilities can be developed and plugged into the database as required.
Attend this webinar to learn:
- The new features and capabilities in PostgreSQL for new workloads, requiring greater flexibility in the data model
- NoSQL with JSON, Hstore and its performance and features for enterprises
- Spatial SQL: advanced features in spatial applications with the PostGIS extension
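A small sketch of the JSON side of this (hedged; table, data and index names are illustrative), showing relational and document data queried together:

CREATE TABLE products (id serial PRIMARY KEY, name text, attrs jsonb);
INSERT INTO products (name, attrs)
VALUES ('widget', '{"color": "red", "tags": ["sale", "new"]}');

-- containment query, indexable with GIN
CREATE INDEX products_attrs_gin ON products USING gin (attrs);
SELECT name, attrs->>'color' AS color
FROM products
WHERE attrs @> '{"tags": ["sale"]}';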
"Why use PgBouncer? It’s a lightweight, easy to configure connection pooler and it does one job well. As you’d expect from a talk on connection pooling, we’ll give a brief summary of connection pooling and why it increases efficiency. We’ll look at when not to use connection pooling, and we’ll demonstrate how to configure PgBouncer and how it works. But. Did you know you can also do this? 1. Scaling PgBouncer PgBouncer is single threaded which means a single instance of PgBouncer isn’t going to do you much good on a multi-threaded and/or multi-CPU machine. We’ll show you how to add more PgBouncer instances so you can use more than one thread for easy scaling. 2. Read-write / read only routing Using different pgBouncer databases you can route read-write traffic to the primary database and route read-only traffic to a number of standby databases. 3. Load balancing When we use multiple PgBouncer instances, load balancing comes for free. Load balancing can be directed to different standbys, and weighted according to ratios of load. 4. Silent failover You can perform silent failover during promotion of a new primary (assuming you have a VIP/DNS etc that always points to the primary). 5. And even: DoS prevention and protection from “badly behaved” applications! By using distinct port numbers you can provide database connections which deal with sudden bursts of incoming traffic in very different ways, which can help prevent the database from becoming swamped during high activity periods. You should leave the presentation wondering if there is anything PgBouncer can’t do."
In this talk I'll discuss how we can combine the power of PostgreSQL with TensorFlow to perform data analysis. By using the pl/python3 procedural language we can integrate machine learning libraries such as TensorFlow with PostgreSQL, opening the door for powerful data analytics combining SQL with AI. Typical use-cases might involve regression analysis to find relationships in an existing dataset and to predict results based on new inputs, or to analyse time series data and extrapolate future data taking into account general trends and seasonal variability whilst ignoring noise. Python is an ideal language for building custom systems to do this kind of work as it gives us access to a rich ecosystem of libraries such as Pandas and Numpy, in addition to TensorFlow itself.
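A minimal sketch of the PL/Python integration described here, with a simple NumPy linear fit standing in for a full TensorFlow model (function name and data are invented):

CREATE EXTENSION IF NOT EXISTS plpython3u;

-- fit y = a*x + b over the given points and predict a new value
CREATE OR REPLACE FUNCTION predict_linear(xs float8[], ys float8[], x_new float8)
RETURNS float8 AS $$
  import numpy as np
  slope, intercept = np.polyfit(xs, ys, 1)
  return float(slope * x_new + intercept)
$$ LANGUAGE plpython3u;

SELECT predict_linear(ARRAY[1,2,3,4]::float8[], ARRAY[2.1,3.9,6.2,8.1]::float8[], 5);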
Practical Partitioning in Production with Postgres (EDB)
Has your table become too large to handle? Have you thought about chopping it up into smaller pieces that are easier to query and maintain? What if it's in constant use? An introduction to the problems that can arise and how PostgreSQL's partitioning features can help, followed by a real-world scenario of partitioning an existing huge table on a live system. We will be looking at the problems caused by having very large tables in your database and how declarative table partitioning in Postgres can help, as well as how to perform dimensioning before, but also after, creating huge tables, partitioning key selection, and the importance of upgrading to get the latest Postgres features. Finally, we will dive into a real-world scenario of partitioning an existing huge table in use on a production system.
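In its simplest form, declarative partitioning looks like the following hedged sketch (table and ranges invented; partitioning a live table requires the additional steps the talk covers):

CREATE TABLE measurements (
    ts     timestamptz NOT NULL,
    device int,
    value  double precision
) PARTITION BY RANGE (ts);

CREATE TABLE measurements_2021_05 PARTITION OF measurements
    FOR VALUES FROM ('2021-05-01') TO ('2021-06-01');
CREATE TABLE measurements_2021_06 PARTITION OF measurements
    FOR VALUES FROM ('2021-06-01') TO ('2021-07-01');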
There have been plenty of “explaining EXPLAIN” type talks over the years, which provide a great introduction to it. They often also cover how to identify a few of the more common issues through it. EXPLAIN is a deep topic though, and to do a good introduction talk, you have to skip over a lot of the tricky bits. As such, this talk will not be a good introduction to EXPLAIN, but instead a deeper dive into some of the things most don’t cover.
The idea is to start with some of the more complex and unintuitive calculations needed to work out the relationships between operations, rows, threads, loops, timings, buffers, CTEs and subplans. Most popular tools handle at least several of these well, but there are cases where they don’t that are worth being conscious of and alert to. For example, we’ll have a look at whether certain numbers are averaged per-loop or per-thread, or both. We’ll also cover a resulting rounding issue or two to be on the lookout for. Finally, some per-operation timing quirks are worth looking out for where CTEs and subqueries are concerned, for example CTEs that are referenced more than once.
As time allows, we can also look at a few rarer issues that can be spotted via EXPLAIN, as well as a few more gotchas that we’ve picked up along the way. This includes things like spotting when the query is JIT, planning, or trigger time dominated, spotting the signs of table and index bloat, issues like lossy bitmap scans or index-only scans fetching from the heap, as well as some things to be aware of when using auto_explain.
The Internet of Things is currently a burgeoning market, and is often associated with specialized data stores. However, PostgreSQL is just as capable for this use case and can offer some compelling advantages. We’ll explore ways to store IoT data in PostgreSQL, covering various ways to store and structure this kind of data, and how range types and differing types of indexes can be of use, also taking a quick look at some extensions designed for this use case. Then we'll look at powerful SQL features which can really help when analyzing IoT data streams, and how the power of a real SQL database can be a key advantage.
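For instance, a hedged sketch of the kind of schema this suggests, pairing a BRIN index (one of those "differing types of indexes") with a range-type column (all names invented):

CREATE TABLE sensor_data (
    sensor_id int,
    ts        timestamptz NOT NULL,
    reading   double precision
);
-- BRIN stays tiny on naturally time-ordered, append-mostly inserts
CREATE INDEX sensor_data_ts_brin ON sensor_data USING brin (ts);

-- a range column plus GiST index answers "active at time X" queries
CREATE TABLE device_sessions (device_id int, active_during tstzrange);
CREATE INDEX device_sessions_gist ON device_sessions USING gist (active_during);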
I would like you to join me on our journey from a complex, multi-instance Oracle topology to a single logical database in PostgreSQL. Each technology and architectural decision point will be discussed, describing how we arrived at our destination. There are five key areas that will be covered:
- Target architecture
- Migration of database objects (tables, indexes, views, synonyms, etc.)
- Migration of database code (packages, functions, procedures, triggers)
- Application tier
- Migration of data, with minimal downtime during cutover
The target architecture is a BDR cluster, where the physical data model and stored data differ between the logical standbys and the lead master/shadow master. We will discuss how this allowed for the simplification of the topology, and the benefits this delivered. Before you go there: yes, I know PostgreSQL does not have synonyms, but an alternative approach was needed. There is a significant amount of business logic in the database tier, all of which needed to be translated into database code. We will look at the tools and extensions available to reproduce the functionality in PostgreSQL, at common non-ISO-standard SQL embedded in the application tier along with JDBC challenges, and finally at some of the data movement tools available. Full disclosure: we are still on the journey, but have learnt a lot on the way.
The proposed talk will go through several questions. The first obvious one is: why would I bother to learn the CLI when I can do whatever I need with a GUI tool?
We'll try to answer why knowing CLI is a MUST for some people (like Postgres DBAs, for example) whereas it's only a bonus for others (like data scientists, for example).
Then we'll go through the 101 basics (how to connect, interactive mode versus non-interactive mode, how to set up the psql environment to work comfortably, and so on).
The last part will be about tips and tricks that will make anyone's journey with psql more effective and enjoyable. I'm looking for the "TIL" effect in people's eyes.
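For example, a few psql settings of the kind such tips usually include (a hedged sample, not the speaker's actual list):

\timing on
\x auto
\pset null '(null)'
\set HISTSIZE 5000

Here \timing prints the elapsed time of every statement, \x auto switches to expanded output only when rows are too wide for the screen, \pset null makes NULL values visible, and HISTSIZE keeps a longer command history.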
EDB 13 - New Enhancements for Security and Usability - APJ (EDB)
Database security is always of paramount importance to all organizations. In this webinar, we will explore the security, usability, and portability updates of the latest version of the EDB database server and tools.
Join us in this webinar to learn:
- The new security features such as SCRAM and the encryption of database passwords and traffic between Failover Manager agents
- Usability updates that automate partitioning, verify backup integrity, and streamline the management of failover and backups
- Portability improvements that simplify running PostgreSQL across on-premise and cloud environments
In this webinar, we will talk about the differences between a physical backup and a logical backup. We will list the advantages and disadvantages, the main considerations, and the tools available for both methods.
- Data loss
- Logical exports
- Standbys
- WALs and recovery
- VM/disk snapshots
- Physical backups
- Conclusion
Come and discover Cloud Native PostgreSQL (CNP), the operator for Kubernetes, directly from the people who designed it and develop it at EDB.
CNP makes it easy to integrate PostgreSQL databases with your applications inside Kubernetes and Red Hat OpenShift Container Platform clusters, thanks to its automated management of the primary/standby architecture, which includes self-healing, failover, switchover, rolling updates, backups, and more.
During the webinar we will cover the following points:
- DevOps and Cloud Native
- Introduction to Cloud Native PostgreSQL
- Architectures
- Main features
- Usage and configuration examples
- Kubernetes, storage and Postgres
- Demo
- Conclusions
New enhancements for security and usability in EDB 13 (EDB)
EDB 13 enhances our flagship database server and tools. This webinar will explore its security, usability, and portability updates. Join us to learn how EDB 13 can help you improve your PostgreSQL productivity and data protection.
Webinar highlights include:
- New security features such as SCRAM and the encryption of database passwords and traffic between Failover Manager agents
- Usability updates that automate partitioning, verify backup integrity and streamline the management of failover and backups
- Portability improvements that simplify running PostgreSQL across on-premise and cloud environments
The webinar will review a multi-layered framework for PostgreSQL security, with a deeper focus on limiting access to the database and data, as well as securing the data.
Using the popular AAA (Authentication, Authorization, Auditing) framework we will cover:
- Best practices for authentication (trust, certificate, MD5, SCRAM, etc.).
- Advanced approaches, such as password profiles.
- Deep dive of authorization and data access control for roles, database objects (tables, etc), view usage, row-level security, and data redaction.
- Auditing, encryption, and SQL injection attack prevention.
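As one concrete slice of the authorization layer, a minimal row-level security sketch (hedged; table and policy are invented):

CREATE TABLE invoices (id serial PRIMARY KEY, tenant name, amount numeric);
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;

-- non-owner roles now see only rows whose tenant matches their login
CREATE POLICY tenant_isolation ON invoices
    USING (tenant = current_user);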
Note: this session is delivered in German
Speaker:
Borys Neselovskyi, Sales Engineer, EDB
EDB Cloud Native Postgres includes database container images and a Kubernetes Operator that manage the lifecycle of a database from deployment to operations. This Kubernetes Operator for Postgres is written by EDB entirely from scratch in the Go language and relies exclusively on the Kubernetes API.
Attend this webinar to learn about:
- DevOps & Cloud Native
- Overview of Cloud Native Postgres
- Storage for Postgres workloads in Kubernetes
- Using Cloud Native Postgres
- Demo
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
- Create a campaign using Mailchimp with merge tags/fields
- Send an interactive Slack channel message (using buttons)
- Have the message received by managers and peers, along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
- Your campaign sent to target colleagues for approval
- If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
- But if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I have been wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure and operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already gotten working for real.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
How the Postgres Query Optimizer Works
1. Explaining the Postgres Query Optimizer
BRUCE MOMJIAN
January, 2015
The optimizer is the "brain" of the database, interpreting SQL
queries and determining the fastest method of execution. This
talk uses the EXPLAIN command to show how the optimizer
interprets queries and determines optimal execution.
Creative Commons Attribution License http://momjian.us/presentations
1 / 61
2. PostgreSQL the database…
◮ Open Source Object Relational DBMS since 1996
◮ Distributed under the PostgreSQL License
◮ Similar technical heritage as Oracle, SQL Server & DB2
◮ However, a strong adherence to standards (ANSI-SQL 2008)
◮ Highly extensible and adaptable design
◮ Languages, indexing, data types, etc.
◮ E.g. PostGIS, JSONB, SQL/MED
◮ Extensive use throughout the world for applications and
organizations of all types
◮ Bundled into Red Hat Enterprise Linux, Ubuntu, CentOS
and Amazon Linux
Explaining the Postgres Query Optimizer 2 / 61
3. PostgreSQL the community…
◮ Independent community led by a Core Team of six
◮ Large, active and vibrant community
◮ www.postgresql.org
◮ Downloads, Mailing lists, Documentation
◮ Sponsors sampler:
◮ Google, Red Hat, VMWare, Skype, Salesforce, HP and
EnterpriseDB
◮ http://www.postgresql.org/community/
Explaining the Postgres Query Optimizer 3 / 61
4. EnterpriseDB the company…
◮ The worldwide leader of Postgres based products and
services founded in 2004
◮ Customers include 50 of the Fortune 500 and 98 of the
Forbes Global 2000
◮ Enterprise offerings:
◮ PostgreSQL Support, Services and Training
◮ Postgres Plus Advanced Server with Oracle Compatibility
◮ Tools for Monitoring, Replication, HA, Backup & Recovery
◮ Community Citizenship
◮ Contributor of key features: Materialized Views, JSON, &
more
◮ Nine community members on staff
Explaining the Postgres Query Optimizer 4 / 61
7. Postgres Query Execution
[Flow diagram of backend query processing: a query arrives via Libpq at the Postmaster, which hands it to a Postgres backend (Main). The statement is parsed, then the Traffic Cop routes utility commands (e.g. CREATE TABLE, COPY) straight to execution, while SELECT, INSERT, UPDATE and DELETE queries pass through Rewrite Query, Generate Paths, Optimal Path, Generate Plan, and Execute Plan, supported by the Utilities, Catalog, Storage Managers, Access Methods and Nodes/Lists modules.]
Explaining the Postgres Query Optimizer 7 / 61
8. Postgres Query Execution
[Simplified flow diagram of the same pipeline: Parse Statement, then the Traffic Cop sends utility commands (e.g. CREATE TABLE, COPY) directly to execution, while SELECT, INSERT, UPDATE and DELETE pass through Rewrite Query, Generate Paths, Optimal Path, Generate Plan and Execute Plan.]
Explaining the Postgres Query Optimizer 8 / 61
9. The Optimizer Is the Brain
http://www.wsmanaging.com/
Explaining the Postgres Query Optimizer 9 / 61
10. What Decisions Does the Optimizer Have to Make?
◮ Scan Method
◮ Join Method
◮ Join Order
Explaining the Postgres Query Optimizer 10 / 61
11. Which Scan Method?
◮ Sequential Scan
◮ Bitmap Index Scan
◮ Index Scan
Explaining the Postgres Query Optimizer 11 / 61
12. A Simple Example Using pg_class.relname
SELECT relname
FROM pg_class
ORDER BY 1
LIMIT 8;
relname
-----------------------------------
_pg_foreign_data_wrappers
_pg_foreign_servers
_pg_user_mappings
administrable_role_authorizations
applicable_roles
attributes
check_constraint_routine_usage
check_constraints
(8 rows)
Explaining the Postgres Query Optimizer 12 / 61
13. Let’s Use Just the First Letter of pg_class.relname
SELECT substring(relname, 1, 1)
FROM pg_class
ORDER BY 1
LIMIT 8;
substring
-----------
_
_
_
a
a
a
c
c
(8 rows)
Explaining the Postgres Query Optimizer 13 / 61
14. Create a Temporary Table with an Index
CREATE TEMPORARY TABLE sample (letter, junk) AS
SELECT substring(relname, 1, 1), repeat('x', 250)
FROM pg_class
ORDER BY random(); -- add rows in random order
SELECT 253
CREATE INDEX i_sample on sample (letter);
CREATE INDEX
All the queries used in this presentation are available at
http://momjian.us/main/writings/pgsql/optimizer.sql.
Explaining the Postgres Query Optimizer 14 / 61
15. Create an EXPLAIN Function
CREATE OR REPLACE FUNCTION lookup_letter(text) RETURNS SETOF text AS $$
BEGIN
RETURN QUERY EXECUTE '
EXPLAIN SELECT letter
FROM sample
WHERE letter = ''' || $1 || '''';
END
$$ LANGUAGE plpgsql;
CREATE FUNCTION
Explaining the Postgres Query Optimizer 15 / 61
16. What is the Distribution of the sample Table?
WITH letters (letter, count) AS (
SELECT letter, COUNT(*)
FROM sample
GROUP BY 1
)
SELECT letter, count, (count * 100.0 / (SUM(count) OVER ()))::numeric(4,1) AS "%"
FROM letters
ORDER BY 2 DESC;
Explaining the Postgres Query Optimizer 16 / 61
17. What is the Distribution of the sample Table?
letter | count | %
--------+-------+------
p | 199 | 78.7
s | 9 | 3.6
c | 8 | 3.2
r | 7 | 2.8
t | 5 | 2.0
v | 4 | 1.6
f | 4 | 1.6
d | 4 | 1.6
u | 3 | 1.2
a | 3 | 1.2
_ | 3 | 1.2
e | 2 | 0.8
i | 1 | 0.4
k | 1 | 0.4
(14 rows)
Explaining the Postgres Query Optimizer 17 / 61
18. Is the Distribution Important?
EXPLAIN SELECT letter
FROM sample
WHERE letter = 'p';
QUERY PLAN
------------------------------------------------------------------------
Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=32)
Index Cond: (letter = 'p'::text)
(2 rows)
Explaining the Postgres Query Optimizer 18 / 61
19. Is the Distribution Important?
EXPLAIN SELECT letter
FROM sample
WHERE letter = 'd';
QUERY PLAN
------------------------------------------------------------------------
Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=32)
Index Cond: (letter = 'd'::text)
(2 rows)
Explaining the Postgres Query Optimizer 19 / 61
20. Is the Distribution Important?
EXPLAIN SELECT letter
FROM sample
WHERE letter = 'k';
QUERY PLAN
------------------------------------------------------------------------
Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=32)
Index Cond: (letter = 'k'::text)
(2 rows)
Explaining the Postgres Query Optimizer 20 / 61
21. Running ANALYZE Causes
a Sequential Scan for a Common Value
ANALYZE sample;
ANALYZE
EXPLAIN SELECT letter
FROM sample
WHERE letter = 'p';
QUERY PLAN
---------------------------------------------------------
Seq Scan on sample (cost=0.00..13.16 rows=199 width=2)
Filter: (letter = 'p'::text)
(2 rows)
Autovacuum cannot ANALYZE (or VACUUM) temporary tables because
these tables are only visible to the creating session.
Explaining the Postgres Query Optimizer 21 / 61
23. A Less Common Value Causes a Bitmap Index Scan
EXPLAIN SELECT letter
FROM sample
WHERE letter = 'd';
QUERY PLAN
-----------------------------------------------------------------------
Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
Recheck Cond: (letter = 'd'::text)
-> Bitmap Index Scan on i_sample (cost=0.00..4.28 rows=4 width=0)
Index Cond: (letter = 'd'::text)
(4 rows)
Explaining the Postgres Query Optimizer 23 / 61
24. Bitmap Index Scan
[Diagram: two bitmap index scans combined. Index 1 produces a row bitmap for col1 = 'A', Index 2 a row bitmap for col2 = 'NS'; the bitmaps are ANDed into a combined bitmap that identifies the matching table rows.]
Explaining the Postgres Query Optimizer 24 / 61
25. An Even Rarer Value Causes an Index Scan
EXPLAIN SELECT letter
FROM sample
WHERE letter = 'k';
QUERY PLAN
-----------------------------------------------------------------------
Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
Index Cond: (letter = 'k'::text)
(2 rows)
Explaining the Postgres Query Optimizer 25 / 61
27. Let’s Look at All Values and their Effects
WITH letter (letter, count) AS (
SELECT letter, COUNT(*)
FROM sample
GROUP BY 1
)
SELECT letter AS l, count, lookup_letter(letter)
FROM letter
ORDER BY 2 DESC;
l | count | lookup_letter
---+-------+-----------------------------------------------------------------------
p | 199 | Seq Scan on sample (cost=0.00..13.16 rows=199 width=2)
p | 199 | Filter: (letter = 'p'::text)
s | 9 | Seq Scan on sample (cost=0.00..13.16 rows=9 width=2)
s | 9 | Filter: (letter = 's'::text)
c | 8 | Seq Scan on sample (cost=0.00..13.16 rows=8 width=2)
c | 8 | Filter: (letter = 'c'::text)
r | 7 | Seq Scan on sample (cost=0.00..13.16 rows=7 width=2)
r | 7 | Filter: (letter = 'r'::text)
…
Explaining the Postgres Query Optimizer 27 / 61
28. OK, Just the First Lines
WITH letter (letter, count) AS (
SELECT letter, COUNT(*)
FROM sample
GROUP BY 1
)
SELECT letter AS l, count,
(SELECT *
FROM lookup_letter(letter) AS l2
LIMIT 1) AS lookup_letter
FROM letter
ORDER BY 2 DESC;
Explaining the Postgres Query Optimizer 28 / 61
29. Just the First EXPLAIN Lines
l | count | lookup_letter
---+-------+-----------------------------------------------------------------------
p | 199 | Seq Scan on sample (cost=0.00..13.16 rows=199 width=2)
s | 9 | Seq Scan on sample (cost=0.00..13.16 rows=9 width=2)
c | 8 | Seq Scan on sample (cost=0.00..13.16 rows=8 width=2)
r | 7 | Seq Scan on sample (cost=0.00..13.16 rows=7 width=2)
t | 5 | Bitmap Heap Scan on sample (cost=4.29..12.76 rows=5 width=2)
f | 4 | Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
v | 4 | Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
d | 4 | Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
a | 3 | Bitmap Heap Scan on sample (cost=4.27..11.38 rows=3 width=2)
_ | 3 | Bitmap Heap Scan on sample (cost=4.27..11.38 rows=3 width=2)
u | 3 | Bitmap Heap Scan on sample (cost=4.27..11.38 rows=3 width=2)
e | 2 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
i | 1 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
k | 1 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
(14 rows)
Explaining the Postgres Query Optimizer 29 / 61
30. We Can Force an Index Scan
SET enable_seqscan = false;
SET enable_bitmapscan = false;
WITH letter (letter, count) AS (
SELECT letter, COUNT(*)
FROM sample
GROUP BY 1
)
SELECT letter AS l, count,
(SELECT *
FROM lookup_letter(letter) AS l2
LIMIT 1) AS lookup_letter
FROM letter
ORDER BY 2 DESC;
31. Notice the High Cost for Common Values
l | count | lookup_letter
---+-------+-----------------------------------------------------------------------
p | 199 | Index Scan using i_sample on sample (cost=0.00..39.33 rows=199 width=2)
s | 9 | Index Scan using i_sample on sample (cost=0.00..22.14 rows=9 width=2)
c | 8 | Index Scan using i_sample on sample (cost=0.00..19.84 rows=8 width=2)
r | 7 | Index Scan using i_sample on sample (cost=0.00..19.82 rows=7 width=2)
t | 5 | Index Scan using i_sample on sample (cost=0.00..15.21 rows=5 width=2)
d | 4 | Index Scan using i_sample on sample (cost=0.00..15.19 rows=4 width=2)
v | 4 | Index Scan using i_sample on sample (cost=0.00..15.19 rows=4 width=2)
f | 4 | Index Scan using i_sample on sample (cost=0.00..15.19 rows=4 width=2)
_ | 3 | Index Scan using i_sample on sample (cost=0.00..12.88 rows=3 width=2)
a | 3 | Index Scan using i_sample on sample (cost=0.00..12.88 rows=3 width=2)
u | 3 | Index Scan using i_sample on sample (cost=0.00..12.88 rows=3 width=2)
e | 2 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
i | 1 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
k | 1 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
(14 rows)
RESET ALL;
RESET
32. This Was the Optimizer’s Preference
l | count | lookup_letter
---+-------+-----------------------------------------------------------------------
p | 199 | Seq Scan on sample (cost=0.00..13.16 rows=199 width=2)
s | 9 | Seq Scan on sample (cost=0.00..13.16 rows=9 width=2)
c | 8 | Seq Scan on sample (cost=0.00..13.16 rows=8 width=2)
r | 7 | Seq Scan on sample (cost=0.00..13.16 rows=7 width=2)
t | 5 | Bitmap Heap Scan on sample (cost=4.29..12.76 rows=5 width=2)
f | 4 | Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
v | 4 | Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
d | 4 | Bitmap Heap Scan on sample (cost=4.28..12.74 rows=4 width=2)
a | 3 | Bitmap Heap Scan on sample (cost=4.27..11.38 rows=3 width=2)
_ | 3 | Bitmap Heap Scan on sample (cost=4.27..11.38 rows=3 width=2)
u | 3 | Bitmap Heap Scan on sample (cost=4.27..11.38 rows=3 width=2)
e | 2 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
i | 1 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
k | 1 | Index Scan using i_sample on sample (cost=0.00..8.27 rows=1 width=2)
(14 rows)
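These crossover points are driven by the column statistics ANALYZE gathers; the most-common-value estimates behind them can be inspected through the standard pg_stats view:
SELECT most_common_vals, most_common_freqs, n_distinct
FROM pg_stats
WHERE tablename = 'sample' AND attname = 'letter';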
33. Which Join Method?
◮ Nested Loop
  ◮ With Inner Sequential Scan
  ◮ With Inner Index Scan
◮ Hash Join
◮ Merge Join
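The following slides demonstrate each method. To experiment, any of them can be discouraged per session; the planner treats these settings as large cost penalties rather than hard bans:
SET enable_nestloop = off;    -- discourage nested loop joins
SET enable_hashjoin = off;    -- discourage hash joins
SET enable_mergejoin = off;   -- discourage merge joins
RESET ALL;                    -- restore the defaults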
34. What Is in pg_proc.oid?
SELECT oid
FROM pg_proc
ORDER BY 1
LIMIT 8;
oid
-----
31
33
34
35
38
39
40
41
(8 rows)
35. Create Temporary Tables
from pg_proc and pg_class
CREATE TEMPORARY TABLE sample1 (id, junk) AS
SELECT oid, repeat('x', 250)
FROM pg_proc
ORDER BY random(); -- add rows in random order
SELECT 2256
CREATE TEMPORARY TABLE sample2 (id, junk) AS
SELECT oid, repeat('x', 250)
FROM pg_class
ORDER BY random(); -- add rows in random order
SELECT 260
These tables have no indexes and no optimizer statistics.
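A quick way to confirm the missing statistics from the creating session:
SELECT tablename, attname
FROM pg_stats
WHERE tablename IN ('sample1', 'sample2');
-- returns no rows until ANALYZE has been run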
36. Join the Two Tables
with a Tight Restriction
EXPLAIN SELECT sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
WHERE sample1.id = 33;
QUERY PLAN
---------------------------------------------------------------------
Nested Loop (cost=0.00..234.68 rows=300 width=32)
-> Seq Scan on sample1 (cost=0.00..205.54 rows=50 width=4)
Filter: (id = 33::oid)
-> Materialize (cost=0.00..25.41 rows=6 width=36)
-> Seq Scan on sample2 (cost=0.00..25.38 rows=6 width=36)
Filter: (id = 33::oid)
(6 rows)
37. Nested Loop Join
with Inner Sequential Scan
[Diagram: nested loop join with inner sequential scan; every outer row is compared against every row of the inner table. No setup required; used for small tables.]
38. Pseudocode for Nested Loop Join
with Inner Sequential Scan
for (i = 0; i < length(outer); i++)
for (j = 0; j < length(inner); j++)
if (outer[i] == inner[j])
output(outer[i], inner[j]);
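Because every outer row is compared with every inner row, the work grows as the product of the two table sizes; this plan is attractive only when the inner table is small.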
39. Join the Two Tables with a Looser Restriction
EXPLAIN SELECT sample1.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
WHERE sample2.id > 33;
QUERY PLAN
----------------------------------------------------------------------
Hash Join (cost=30.50..950.88 rows=20424 width=32)
Hash Cond: (sample1.id = sample2.id)
-> Seq Scan on sample1 (cost=0.00..180.63 rows=9963 width=36)
-> Hash (cost=25.38..25.38 rows=410 width=4)
-> Seq Scan on sample2 (cost=0.00..25.38 rows=410 width=4)
Filter: (id > 33::oid)
(6 rows)
40. Hash Join
[Diagram: hash join; the inner table is hashed into an in-memory hash table, which must fit in main memory, and each outer row then probes the hash for matches.]
41. Pseudocode for Hash Join
/* build: hash every inner row into an in-memory hash table */
for (j = 0; j < length(inner); j++)
{
    hash_key = hash(inner[j]);
    append(hash_store[hash_key], inner[j]);
}
/* probe: look up each outer row in the hash table */
for (i = 0; i < length(outer); i++)
{
    hash_key = hash(outer[i]);
    for (j = 0; j < length(hash_store[hash_key]); j++)
        if (outer[i] == hash_store[hash_key][j])
            output(outer[i], hash_store[hash_key][j]);
}
42. Join the Two Tables with No Restriction
EXPLAIN SELECT sample1.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id);
QUERY PLAN
-------------------------------------------------------------------------
Merge Join (cost=927.72..1852.95 rows=61272 width=32)
Merge Cond: (sample2.id = sample1.id)
-> Sort (cost=85.43..88.50 rows=1230 width=4)
Sort Key: sample2.id
-> Seq Scan on sample2 (cost=0.00..22.30 rows=1230 width=4)
-> Sort (cost=842.29..867.20 rows=9963 width=36)
Sort Key: sample1.id
-> Seq Scan on sample1 (cost=0.00..180.63 rows=9963 width=36)
(8 rows)
43. Merge Join
[Diagram: merge join; both inputs are sorted and then scanned in lockstep. Ideal for large tables; an index on the join column can be used to eliminate the sort.]
44. Pseudocode for Merge Join
sort(outer);
sort(inner);
i = 0;
j = 0;
save_j = 0;   /* start of the inner run matching the current outer row */
while (i < length(outer))
{
    /* skip inner rows that sort before the current outer row */
    j = save_j;
    while (j < length(inner) && inner[j] < outer[i])
        j++;
    save_j = j;
    /* output the run of equal inner rows */
    while (j < length(inner) && inner[j] == outer[i])
    {
        output(outer[i], inner[j]);
        j++;
    }
    i++;   /* a duplicate outer value rescans the run from save_j */
}
45. Order of Joined Relations Is Insignificant
EXPLAIN SELECT sample2.junk
FROM sample2 JOIN sample1 ON (sample2.id = sample1.id);
QUERY PLAN
------------------------------------------------------------------------
Merge Join (cost=927.72..1852.95 rows=61272 width=32)
Merge Cond: (sample2.id = sample1.id)
-> Sort (cost=85.43..88.50 rows=1230 width=36)
Sort Key: sample2.id
-> Seq Scan on sample2 (cost=0.00..22.30 rows=1230 width=36)
-> Sort (cost=842.29..867.20 rows=9963 width=4)
Sort Key: sample1.id
-> Seq Scan on sample1 (cost=0.00..180.63 rows=9963 width=4)
(8 rows)
The most restrictive relation, here sample2, is always placed on the outer side of merge joins; all the previous merge joins also had sample2 in the outer position.
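The omitted slide 46 presumably gathered optimizer statistics, e.g.:
ANALYZE sample1;
ANALYZE sample2;
With accurate row counts (2256 and 260) the optimizer now prefers a hash join, hashing the small sample2, over the sort-heavy merge join.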
47. This Was a Merge Join without Optimizer Statistics
EXPLAIN SELECT sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id);
QUERY PLAN
------------------------------------------------------------------------
Hash Join (cost=15.85..130.47 rows=260 width=254)
Hash Cond: (sample1.id = sample2.id)
-> Seq Scan on sample1 (cost=0.00..103.56 rows=2256 width=4)
-> Hash (cost=12.60..12.60 rows=260 width=258)
-> Seq Scan on sample2 (cost=0.00..12.60 rows=260 width=258)
(5 rows)
48. Outer Joins Can Affect Optimizer Join Usage
EXPLAIN SELECT sample1.junk
FROM sample1 RIGHT OUTER JOIN sample2 ON (sample1.id = sample2.id);
QUERY PLAN
--------------------------------------------------------------------------
Hash Left Join (cost=131.76..148.26 rows=260 width=254)
Hash Cond: (sample2.id = sample1.id)
-> Seq Scan on sample2 (cost=0.00..12.60 rows=260 width=4)
-> Hash (cost=103.56..103.56 rows=2256 width=258)
-> Seq Scan on sample1 (cost=0.00..103.56 rows=2256 width=258)
(5 rows)
Note that the RIGHT OUTER JOIN has been rewritten as an equivalent Hash Left Join with the inputs swapped, making sample2 the outer relation. Use of hashes for outer joins was added in Postgres 9.1.
49. Cross Joins Are Nested Loop Joins
without Join Restriction
EXPLAIN SELECT sample1.junk
FROM sample1 CROSS JOIN sample2;
QUERY PLAN
----------------------------------------------------------------------
Nested Loop (cost=0.00..7448.81 rows=586560 width=254)
-> Seq Scan on sample1 (cost=0.00..103.56 rows=2256 width=254)
-> Materialize (cost=0.00..13.90 rows=260 width=0)
-> Seq Scan on sample2 (cost=0.00..12.60 rows=260 width=0)
(4 rows)
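The estimated row count is simply the product of the two inputs: 2256 × 260 = 586,560, which is why an accidental cross join is so expensive.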
50. Create Indexes
CREATE INDEX i_sample1 on sample1 (id);
CREATE INDEX i_sample2 on sample2 (id);
51. Nested Loop with Inner Index Scan Now Possible
EXPLAIN SELECT sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
WHERE sample1.id = 33;
QUERY PLAN
---------------------------------------------------------------------------------
Nested Loop (cost=0.00..16.55 rows=1 width=254)
-> Index Scan using i_sample1 on sample1 (cost=0.00..8.27 rows=1 width=4)
Index Cond: (id = 33::oid)
-> Index Scan using i_sample2 on sample2 (cost=0.00..8.27 rows=1 width=258)
Index Cond: (sample2.id = 33::oid)
(5 rows)
52. Nested Loop Join with Inner Index Scan
[Diagram: nested loop join with inner index scan; for each outer row, an index lookup finds the matching inner rows. No setup required, but the index must already exist.]
53. Pseudocode for Nested Loop Join
with Inner Index Scan
for (i = 0; i < length(outer); i++)
{
    /* probe the inner table's index with the current outer key */
    index_entry = get_first_match(outer[i]);
    while (index_entry)
    {
        output(outer[i], inner[index_entry]);
        index_entry = get_next_match(index_entry);
    }
}
54. Query Restrictions Affect Join Usage
EXPLAIN SELECT sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
WHERE sample2.junk ~ '^aaa';
QUERY PLAN
-------------------------------------------------------------------------------
Nested Loop (cost=0.00..21.53 rows=1 width=254)
-> Seq Scan on sample2 (cost=0.00..13.25 rows=1 width=258)
Filter: (junk ~ '^aaa'::text)
-> Index Scan using i_sample1 on sample1 (cost=0.00..8.27 rows=1 width=4)
Index Cond: (sample1.id = sample2.id)
(5 rows)
No junk rows begin with 'aaa', so the optimizer expects at most one match and chooses a nested loop with an inner index scan.
55. All 'junk' Columns Begin with 'xxx'
EXPLAIN SELECT sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
WHERE sample2.junk ~ '^xxx';
QUERY PLAN
------------------------------------------------------------------------
Hash Join (cost=16.50..131.12 rows=260 width=254)
Hash Cond: (sample1.id = sample2.id)
-> Seq Scan on sample1 (cost=0.00..103.56 rows=2256 width=4)
-> Hash (cost=13.25..13.25 rows=260 width=258)
-> Seq Scan on sample2 (cost=0.00..13.25 rows=260 width=258)
Filter: (junk ~ '^xxx'::text)
(6 rows)
Hash join was chosen because many more rows are expected. The smaller table, here sample2, is always the one hashed.
56. Without LIMIT, Hash Is Used
for this Unrestricted Join
EXPLAIN SELECT sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id);
QUERY PLAN
------------------------------------------------------------------------
Hash Join (cost=15.85..130.47 rows=260 width=254)
Hash Cond: (sample1.id = sample2.id)
-> Seq Scan on sample1 (cost=0.00..103.56 rows=2256 width=4)
-> Hash (cost=12.60..12.60 rows=260 width=258)
-> Seq Scan on sample2 (cost=0.00..12.60 rows=260 width=258)
(5 rows)
57. LIMIT Can Affect Join Usage
EXPLAIN SELECT sample2.id, sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
ORDER BY 1
LIMIT 1;
QUERY PLAN
------------------------------------------------------------------------------------------
Limit (cost=0.00..1.83 rows=1 width=258)
-> Nested Loop (cost=0.00..477.02 rows=260 width=258)
-> Index Scan using i_sample2 on sample2 (cost=0.00..52.15 rows=260 width=258)
-> Index Scan using i_sample1 on sample1 (cost=0.00..1.62 rows=1 width=4)
Index Cond: (sample1.id = sample2.id)
(5 rows)
58. LIMIT 10
EXPLAIN SELECT sample2.id, sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
ORDER BY 1
LIMIT 10;
QUERY PLAN
------------------------------------------------------------------------------------------
Limit (cost=0.00..18.35 rows=10 width=258)
-> Nested Loop (cost=0.00..477.02 rows=260 width=258)
-> Index Scan using i_sample2 on sample2 (cost=0.00..52.15 rows=260 width=258)
-> Index Scan using i_sample1 on sample1 (cost=0.00..1.62 rows=1 width=4)
Index Cond: (sample1.id = sample2.id)
(5 rows)
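The Limit node's cost is the join's total cost prorated by the fraction of rows expected to be fetched: 477.02 × (1 / 260) ≈ 1.83 for LIMIT 1 and 477.02 × (10 / 260) ≈ 18.35 for LIMIT 10. Because the nested loop over the index delivers rows already ordered by sample2.id, no sort is needed and the first rows are cheap.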
59. LIMIT 100 Switches to Hash Join
EXPLAIN SELECT sample2.id, sample2.junk
FROM sample1 JOIN sample2 ON (sample1.id = sample2.id)
ORDER BY 1
LIMIT 100;
QUERY PLAN
------------------------------------------------------------------------------------
Limit (cost=140.41..140.66 rows=100 width=258)
-> Sort (cost=140.41..141.06 rows=260 width=258)
Sort Key: sample2.id
-> Hash Join (cost=15.85..130.47 rows=260 width=258)
Hash Cond: (sample1.id = sample2.id)
-> Seq Scan on sample1 (cost=0.00..103.56 rows=2256 width=4)
-> Hash (cost=12.60..12.60 rows=260 width=258)
-> Seq Scan on sample2 (cost=0.00..12.60 rows=260 width=258)
(8 rows)