The document discusses how database optimizers can sometimes provide incorrect cardinality estimates that result in inefficient query plans. It provides four examples of cardinality errors caused by uneven data distributions. The key strategies for addressing cardinality problems are: 1) giving the optimizer more statistical information through histograms and SQL profiles, 2) overriding optimizer decisions with hints, and 3) changing the application design/data model. Providing more information to the optimizer usually improves plans without additional code changes.
Managing Statistics for Optimal Query Performance (Karen Morton)
Half the battle of writing good SQL is in understanding how the Oracle query optimizer analyzes your code and applies statistics in order to derive the “best” execution plan. The other half of the battle is successfully applying that knowledge to the databases that you manage. The optimizer uses statistics as input to develop query execution plans, and so these statistics are the foundation of good plans. If the statistics supplied aren’t representative of your actual data, you can expect bad plans. However, if the statistics are representative of your data, then the optimizer will probably choose an optimal plan.
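The point above — that plans are only as good as the statistics behind them — can be sketched with SQLite's ANALYZE as a lightweight analogue of Oracle's DBMS_STATS (table and index names here are made up for illustration):

```python
import sqlite3

# Sketch: gathering optimizer statistics, using SQLite's ANALYZE as an
# analogue of Oracle's DBMS_STATS. ANALYZE samples the data and stores
# per-index row counts in sqlite_stat1, which the planner uses for costing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("OPEN",)] * 5 + [("CLOSED",)] * 995)
conn.execute("CREATE INDEX orders_status ON orders(status)")

conn.execute("ANALYZE")  # without this, the planner falls back to guesses
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)  # total rows and average rows per distinct status value
```

If the data changes substantially after ANALYZE, the stored counts no longer represent the table, which is exactly the "unrepresentative statistics" failure mode the summary describes.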
The document discusses MySQL data manipulation commands. It provides examples of using SELECT statements to retrieve data from tables based on specified criteria, INSERT statements to add new data to tables, UPDATE statements to modify existing data in tables, and the basic syntax for these commands. It also reviews naming conventions and some best practices for working with tables in MySQL.
Paul Guerin is an OCP Meetup presenter who has worked as a DBA at Origin Energy for 3.5 years. He discusses different types of access paths that a database query optimizer can use to retrieve data from a database, including full table scans, index scans using rowids, unique index scans, range index scans, skip index scans, and full index scans. He provides examples of how the optimizer chooses between these access paths based on factors like indexes available and estimated execution costs.
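The access-path choice described above (full scan versus index access) can be observed in any database that exposes its plans; a minimal sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in for Oracle's plan output (schema is hypothetical):

```python
import sqlite3

# Sketch of access-path selection. With no usable index the planner
# chooses a full table scan; with an indexed equality predicate it
# chooses an index search instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept_id INTEGER, name TEXT)")
conn.execute("CREATE INDEX emp_dept ON emp(dept_id)")

scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM emp WHERE name = 'a'").fetchall()
seek = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM emp WHERE dept_id = 10").fetchall()
print(scan[-1][-1])  # full scan of emp
print(seek[-1][-1])  # search via index emp_dept
```

The same comparison in Oracle would come from DBMS_XPLAN, with the optimizer weighing estimated cost rather than simple rules.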
SQL Plan Management with Oracle Database provides tools to manage SQL performance and stability. It includes SQL profiles, stored outlines, SQL patches, and SQL baselines that can capture and enforce execution plans. New features in Oracle 12c include adaptive plans, which automatically choose join methods and parallel distribution, as well as adaptive statistics using dynamic sampling to improve cardinality estimates. Bind variable peeking and cardinality feedback also help the optimizer select optimal plans.
This document provides 9 hints for optimizing Oracle database performance:
1. Take a methodical and empirical approach to tuning by focusing on root causes, measuring performance before and after changes, and avoiding "silver bullets".
2. Design databases and applications with performance in mind from the beginning.
3. Index wisely by only creating useful indexes that improve performance without excessive overhead.
4. Leverage built-in Oracle tools like DBMS_XPLAN and SQL Trace to measure performance.
5. Tune the optimizer by adjusting parameters and statistics to encourage better execution plans.
6. Focus SQL and PL/SQL tuning on problem queries, joins, sorts, and DML statements.
7. Address
The document discusses Oracle database performance tuning. It covers identifying and resolving performance issues through tools like AWR and ASH reports. Common causes of performance problems include wait events, old statistics, incorrect execution plans, and I/O issues. The document recommends collecting specific data when analyzing problems and provides references and scripts for further tuning tasks.
The final part of the SQL Tuning workshop focuses on applying the techniques discussed in the previous sections to help diagnose and correct a number of problematic SQL statements and shows how you can use SQL Plan Management or a SQL Patch to influence an execution plan.
The document discusses adaptive query optimization in Oracle 12c. Key points include:
- In 12c, adaptive plans allow the execution plan to change at runtime based on statistics collected, such as switching from a hash join to a nested loops join.
- During the first execution, a statistics collector is inserted into the plan, and the plan can switch based on the rows actually observed; SQL plan directives may then be created.
- For subsequent executions, the information from the initial execution is used to automatically re-optimize the plan, improving performance over time.
The document discusses execution plans in Oracle, including what they are, how to view them using tools like DBMS_XPLAN, details contained in plans and how to interpret them, tips for tuning plans such as gathering statistics and adding indexes, and provides an example case study of tuning a SQL statement that was performing a full table scan through the use of indexes.
This document discusses PostgreSQL query optimization techniques. It covers identifying slow queries, understanding query plans, and provides examples of optimizations like adding indexes and changing query structures. The key steps are finding queries to optimize using tools like EXPLAIN and pg_stat_statements, analyzing queries and plans to understand performance bottlenecks, and then making changes like creating indexes, restructuring queries, and adjusting configuration settings to improve performance.
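The workflow summarized above (read the plan, spot the bottleneck, add an index, confirm the plan changed) can be sketched compactly; SQLite's EXPLAIN QUERY PLAN stands in for Postgres's EXPLAIN and pg_stat_statements here, and the table is illustrative:

```python
import sqlite3

# Sketch of the EXPLAIN-driven tuning loop: capture the plan before and
# after adding an index and verify the access path actually changed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER)")

q = "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
before = conn.execute(q).fetchall()[-1][-1]   # full table scan
conn.execute("CREATE INDEX events_user ON events(user_id)")
after = conn.execute(q).fetchall()[-1][-1]    # index search
print(before, "->", after)
```

In Postgres the confirmation step would also compare actual timings via EXPLAIN (ANALYZE), since a changed plan is not automatically a faster one.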
1. The document discusses using graphics and data visualization to improve understanding of database performance issues and SQL tuning. It provides examples of how visualizations can clearly show relationships in complex SQL queries and data that are difficult to understand from text or code alone.
2. Key steps in visual SQL tuning are laid out, including drawing tables as nodes, joins as connection lines, and filters as markings on tables. This helps identify optimization opportunities like missing indexes or stale statistics.
3. The document emphasizes that a lack of clarity in visualizing complex data and queries can have devastating consequences, while graphics enable easy understanding and effective problem-solving.
Design and develop with performance in mind
Establish a tuning environment
Index wisely
Reduce parsing
Take advantage of Cost Based Optimizer
Avoid accidental table scans
Optimize necessary table scans
Optimize joins
Use array processing
Consider PL/SQL for “tricky” SQL
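The "use array processing" item in the list above maps to batched binds in any client API (array binding or FORALL in Oracle); a minimal Python DB-API sketch with an illustrative table:

```python
import sqlite3

# "Array processing": send many rows per call instead of one statement
# per row, cutting round trips and parse overhead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")

rows = [(i,) for i in range(10_000)]
conn.executemany("INSERT INTO t (n) VALUES (?)", rows)  # one batched call
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10000
```

Against a networked database the difference versus a row-at-a-time loop is usually dramatic, because each single-row execute pays a full network round trip.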
Big Data Analytics with MariaDB ColumnStore (MariaDB plc)
This document provides an overview of MariaDB ColumnStore. Key points include:
- MariaDB ColumnStore is an open source columnar storage engine that provides high performance analytics on large datasets in a scalable distributed environment using standard SQL.
- Columnar storage organizes data by columns rather than rows, improving query performance by only accessing relevant columns. It supports workloads from terabytes to petabytes of data.
- Common use cases include data warehousing, financial services, healthcare, telecom, and any workload requiring analysis of millions to billions of rows.
- The architecture employs a distributed query processing model with horizontal partitioning and parallel query execution across nodes for high scalability.
This document provides an overview of PostgreSQL topics including:
- Installation and configuration best practices such as using package management and configuring logging
- Routine maintenance activities like vacuuming and backups
- Upgrades and the differences between major, minor, and bugfix versions
- Advanced SQL topics like window functions, common table expressions, and querying slow queries
The document discusses various techniques for optimizing query performance in MySQL, including using indexes appropriately, avoiding full table scans, and tools like EXPLAIN, Performance Schema, and pt-query-digest for analyzing queries and identifying optimization opportunities. It provides recommendations for index usage, covering indexes, sorting and joins, and analyzing slow queries.
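One of the recommendations above, covering indexes, is easy to demonstrate outside MySQL as well; a SQLite sketch (schema is hypothetical):

```python
import sqlite3

# A covering index satisfies the query from the index alone, so the
# engine never has to visit the table rows. SQLite flags this in the plan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER, total REAL)")
conn.execute("CREATE INDEX orders_cov ON orders(cust, total)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT cust, total FROM orders WHERE cust = 1"
).fetchall()[-1][-1]
print(plan)  # search via a covering index, no table access
```

In MySQL the equivalent signal is "Using index" in the Extra column of EXPLAIN output.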
Performance Schema for MySQL Troubleshooting (Sveta Smirnova)
The Performance Schema provides detailed information for troubleshooting and optimizing MySQL. It collects instrumentation data on server operations, statements, memory usage, locks and connections. The data can be used to identify slow queries, statements not using indexes, memory consumption trends over time, and more. Configuration and enabling specific instruments allows controlling the level of detail collected.
This document discusses various Oracle SQL concepts including query optimization, execution plans, joins, indexes, and full table scans. It provides guidance on understanding how Oracle processes and executes SQL queries, the importance of statistics and selectivity, and techniques for writing efficient queries such as predicate pushing and query transformations. The goal is to help readers gain a conceptual understanding of Oracle's internals to formulate more efficient SQL.
Before migrating from 10g to 11g or 12c, take the following considerations into account. It is not as simple as just switching the database engine; application-level considerations are also required.
A few things about the Oracle optimizer - 2013 (Connor McDonald)
The document discusses how using the wrong data types for columns in a database table can negatively impact performance and data integrity. It shows examples of creating a table with date, string, and number columns using implicit data type conversions and the problems this causes for indexing, statistics gathering, and query optimization. Maintaining the correct data types is important for the optimizer to choose efficient execution plans and for the database to properly enforce data constraints.
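The core pitfall described above can be shown in a few lines: dates stored as strings compare lexically rather than chronologically, which silently breaks range predicates and index order (values below are illustrative):

```python
from datetime import date

# Dates stored as strings: comparison is character-by-character,
# so September sorts after October.
as_strings = "9/1/2024" < "10/1/2024"          # Sept 1 vs Oct 1, as text
print(as_strings)                               # False: '9' > '1' lexically

# With a proper date type the comparison is chronological.
as_dates = date(2024, 9, 1) < date(2024, 10, 1)
print(as_dates)                                 # True
```

The same mismatch also defeats statistics: a histogram over string-typed "dates" describes their lexical distribution, not their distribution in time.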
The document discusses adaptive query optimization in Oracle 12c. It begins by describing drawbacks of the optimizer in pre-12c versions, such as insufficient statistics triggering dynamic sampling. It then outlines the key features of adaptive query optimization in 12c, including adaptive/dynamic plans using techniques like adaptive parallel distribution and adaptive joins. It also discusses automatic re-optimization using feedback from initial executions. The document provides illustrations of these techniques using example queries and optimizer statistics.
The document discusses strategies for optimizing queries by shaping the optimizer's search space. It recommends:
1. Maximizing data locality by using basic B-tree indexes rather than more complex options like partitions or clusters.
2. Writing queries to explicitly exploit indexes by using range conditions, ordering results to match the index order, and terminating scans after a specified number of rows.
3. Ordering columns in multi-column indexes to match the predicates in common queries, with equality conditions before range conditions.
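Recommendation 3 above (equality columns before range columns in a composite index) can be sketched with SQLite and a hypothetical jobs table:

```python
import sqlite3

# With the equality column (status) leading, the index pins the status
# range and then scans created in order, so the ORDER BY needs no sort.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, created INTEGER)")
conn.execute("CREATE INDEX jobs_status_created ON jobs(status, created)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM jobs "
    "WHERE status = 'QUEUED' AND created > 1000 ORDER BY created"
).fetchall()[-1][-1]
print(plan)  # search via jobs_status_created on (status=?, created>?)
```

Reversing the column order would force the engine to scan the whole created range and filter on status afterwards, touching far more index entries.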
This document discusses Oracle query optimizer concepts like selectivity, cardinality, and object statistics. It provides examples of how the optimizer estimates cardinality based on statistics values like number of rows, distinct values, density and nulls. It also shows how index statistics like clustering factor, leaf blocks impact the choice between an index scan or full table scan.
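The basic single-column cardinality estimate mentioned above is simple arithmetic: absent a histogram, the optimizer assumes a uniform distribution over the distinct non-null values (numbers below are illustrative):

```python
# estimated rows = non-null rows * selectivity,
# where selectivity ~= 1 / num_distinct for an equality predicate.
num_rows = 100_000
num_nulls = 10_000
num_distinct = 30

selectivity = 1 / num_distinct
est_cardinality = (num_rows - num_nulls) * selectivity
print(round(est_cardinality))  # 3000
```

If one value actually accounts for most of the rows, this uniform estimate is badly wrong, which is precisely where histograms (and, in 12c, SQL Plan Directives) come in.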
The document summarizes how SQL Plan Directives in Oracle 12c can help address issues caused by cardinality misestimation in the optimizer. It provides an example where the optimizer underestimates the number of rows returned by a query on a table due to not having statistics on correlated columns. In 12c, a SQL Plan Directive is automatically generated after the first execution to capture this misestimation. On subsequent queries, the directive can be used to provide more accurate cardinality estimates through automatic reoptimization or dynamic sampling.
dbms_xplan - A Swiss army knife for performance engineers (Riyaj Shamsudeen)
This document discusses dbms_xplan, a tool for performance engineers to analyze execution plans. It provides options for displaying plans from the plan table, the shared SQL area in memory, and AWR history. dbms_xplan provides more detailed information than traditional tools like tkprof, including predicates, notes, bind values, and plan history. Displaying plans from memory and AWR requires privileges on the relevant dictionary views. The document also demonstrates usage examples and output formats for functions such as dbms_xplan.display, dbms_xplan.display_cursor, and dbms_xplan.display_awr.
The optimizer is the "brain" of the database, interpreting SQL queries and determining the fastest method of execution. In this talk, Bruce Momjian, Senior Database Architect at EnterpriseDB and co-founder of the PostgreSQL Global Development Group, uses the EXPLAIN command to show how the optimizer interprets queries and determines optimal execution. The talk is aimed at helping developers and administrators understand how Postgres executes their queries and what steps they can take to understand, and perhaps improve, its behavior.
The document discusses Oracle's Automatic Workload Repository (AWR) and how it can be used to analyze database performance issues. It provides an overview of AWR basics and functionality, walks through analyzing an AWR report including a real-world case study of identifying a performance regression, and discusses AWR administration and diagnostics.
The document discusses memory usage in Linux systems. It begins by describing the boot process and how the kernel loads into memory. It then explains that Linux uses virtual memory, where each process has its own virtual address space. Physical memory is limited by factors like CPU architecture and motherboard configuration. When an executable runs, not all of its code is loaded; dynamic libraries it depends on are mapped into its address space as well, significantly increasing its memory usage compared to the executable size alone.
The document discusses SQL tuning methodology. The three pillars of SQL tuning are: 1) diagnostics collection to identify problematic SQL, 2) root cause analysis to determine why SQL is performing poorly, and 3) remediation steps to address issues. It covers tools for diagnostics collection like SQL trace, AWR, and explains execution plans and the cost-based optimizer. The document provides a methodology for SQL tuning including identifying SQL, collecting data, analyzing root causes, and testing and implementing solutions.
This document discusses Oracle wait events. It explains that wait events track where Oracle is spending its time, including different types of waits like CPU time, I/O events, enqueue events and latch events. It provides examples of specific wait events like db file sequential read, direct path write, log file sync and buffer busy waits. It also gives recommendations for interpreting wait event data and resolving high wait times through methods like tuning SQL, improving I/O speeds, and reducing contention.
This document discusses key performance indicators (KPIs) for monitoring the health and resource utilization of Exadata database machines using Oracle Enterprise Manager. It defines 10 metric extensions to monitor storage servers, including metrics for I/O operations per second, throughput, response time, load, and composite metrics to evaluate overall disk health. Instructions are provided on creating these metric extensions in Enterprise Manager using SQL queries and setting initial warning and critical thresholds. The document also covers Exadata and storage server architecture and explains how to monitor components holistically using these defined KPIs and Enterprise Manager services.
This document provides a system capacity plan for Company's new architecture. It determines the server, memory, and disk capacity requirements for the production, user learning, and testing environments to support the Oracle applications and business volumes over the next 3 years. Key results found sufficient initial capacity for cutover but identified future upgrades that may be needed. It also specifies desktop client machine requirements.
This document describes a visualization technique for SQL tuning called Visual SQL Tuning (VST). VST involves drawing the tables involved in a SQL query as nodes and the joins between the tables as connecting lines to show the relationships. It also identifies any filter conditions in the WHERE clause marked on the relevant tables. This visual representation of the tables, joins, and filters can help identify the optimal execution path for the SQL query.
The document provides information on useful Linux/UNIX command line tools for Oracle DBAs to monitor and troubleshoot the underlying operating system and Oracle database. It discusses tools such as sar, sadc, sadf, mpstat, vmstat, ipcs, and others that provide statistics on CPU usage, memory usage, paging activity, process activity, and interprocess communication resources. For each tool, it provides example commands and output to explain what statistics are reported.
This document introduces several concepts for tuning SQL queries, including modifying the join order, join method, and access method. It discusses using hints, statistics, and initialization parameters to influence the execution plan and cause the database to run queries more efficiently. Examples are provided for improving a sample query that joins multiple tables using different techniques.
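The join-method choice mentioned above (one of the things a hint can override) comes down to two very different algorithms; a pure-Python sketch with illustrative data:

```python
# Two join methods over the same inputs: nested loops (probe the inner
# table once per outer row) versus hash join (build a hash table on the
# smaller input, then probe it once per outer row).
emps  = [(1, "a", 10), (2, "b", 20), (3, "c", 10)]   # (id, name, dept_id)
depts = [(10, "eng"), (20, "ops")]                   # (id, dept_name)

# Nested loops join: O(n * m) comparisons.
nl = [(e[1], d[1]) for e in emps for d in depts if e[2] == d[0]]

# Hash join: O(n + m) once the build side fits in memory.
build = {d[0]: d[1] for d in depts}
hj = [(e[1], build[e[2]]) for e in emps if e[2] in build]

print(sorted(nl) == sorted(hj))  # same rows, different cost profile
```

Both produce identical rows; the optimizer's job (and the tuner's, when hinting) is to pick the one whose cost profile fits the row counts involved, which is why a bad cardinality estimate so often picks the wrong join method.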
The document describes Active Session History (ASH), a new methodology for performance tuning introduced by Oracle. ASH simplifies performance tuning by using statistical sampling to collect session state and resource usage data over time. This provides a multidimensional view of sessions, SQL, users, objects, and waits consuming resources. ASH replaces previous methods that collected complete data at infrequent intervals, obscuring problems. Its sampling approach is cheaper, faster, and provides a good representation of the workload. The document outlines how ASH data can be used to identify top resource consumers and troubleshoot issues.
On episode 272 of the Digital and Social Media Sports Podcast, Neil chatted with Brian Fitzsimmons, Director of Licensing and Business Development for Barstool Sports.
What follows is a collection of snippets from the podcast. To hear the full interview and more, check out the podcast on all podcast platforms and at www.dsmsports.net
Discover innovative uses of Revit in urban planning and design, enhancing city landscapes with advanced architectural solutions. Understand how architectural firms are using Revit to transform how processes and outcomes within urban planning and design fields look. They are supplementing work and putting in value through speed and imagination that the architects and planners are placing into composing progressive urban areas that are not only colorful but also pragmatic.
Presentation by Herman Kienhuis (Curiosity VC) on Investing in AI for ABS Alu...Herman Kienhuis
Presentation by Herman Kienhuis (Curiosity VC) on developments in AI, the venture capital investment landscape and Curiosity VC's approach to investing, at the alumni event of Amsterdam Business School (University of Amsterdam) on June 13, 2024 in Amsterdam.
Industrial Tech SW: Category Renewal and CreationChristian Dahlen
Every industrial revolution has created a new set of categories and a new set of players.
Multiple new technologies have emerged, but Samsara and C3.ai are only two companies which have gone public so far.
Manufacturing startups constitute the largest pipeline share of unicorns and IPO candidates in the SF Bay Area, and software startups dominate in Germany.
Anny Serafina Love - Letter of Recommendation by Kellen Harkins, MS.AnnySerafinaLove
This letter, written by Kellen Harkins, Course Director at Full Sail University, commends Anny Love's exemplary performance in the Video Sharing Platforms class. It highlights her dedication, willingness to challenge herself, and exceptional skills in production, editing, and marketing across various video platforms like YouTube, TikTok, and Instagram.
Best practices for project execution and deliveryCLIVE MINCHIN
A select set of project management best practices to keep your project on-track, on-cost and aligned to scope. Many firms have don't have the necessary skills, diligence, methods and oversight of their projects; this leads to slippage, higher costs and longer timeframes. Often firms have a history of projects that simply failed to move the needle. These best practices will help your firm avoid these pitfalls but they require fortitude to apply.
Storytelling is an incredibly valuable tool to share data and information. To get the most impact from stories there are a number of key ingredients. These are based on science and human nature. Using these elements in a story you can deliver information impactfully, ensure action and drive change.
Call8328958814 satta matka Kalyan result satta guessing➑➌➋➑➒➎➑➑➊➍
Satta Matka Kalyan Main Mumbai Fastest Results
Satta Matka ❋ Sattamatka ❋ New Mumbai Ratan Satta Matka ❋ Fast Matka ❋ Milan Market ❋ Kalyan Matka Results ❋ Satta Game ❋ Matka Game ❋ Satta Matka ❋ Kalyan Satta Matka ❋ Mumbai Main ❋ Online Matka Results ❋ Satta Matka Tips ❋ Milan Chart ❋ Satta Matka Boss❋ New Star Day ❋ Satta King ❋ Live Satta Matka Results ❋ Satta Matka Company ❋ Indian Matka ❋ Satta Matka 143❋ Kalyan Night Matka..
❼❷⓿❺❻❷❽❷❼❽ Dpboss Matka Result Satta Matka Guessing Satta Fix jodi Kalyan Final ank Satta Matka Dpbos Final ank Satta Matta Matka 143 Kalyan Matka Guessing Final Matka Final ank Today Matka 420 Satta Batta Satta 143 Kalyan Chart Main Bazar Chart vip Matka Guessing Dpboss 143 Guessing Kalyan night
4 Benefits of Partnering with an OnlyFans Agency for Content Creators.pdfonlyfansmanagedau
In the competitive world of content creation, standing out and maximising revenue on platforms like OnlyFans can be challenging. This is where partnering with an OnlyFans agency can make a significant difference. Here are five key benefits for content creators considering this option:
5. Before SQL
Example - mainframe Datacom/DB COBOL
List index names
Write loops
  read a from one index i1 where one.c=10
  while more table one rows exist get next row
    read b from two index i2 where two.a = one.a
    while more table two rows exist get next row
      print one.a,two.b
    end while
  end while
6. SQL
Tell what you want, not how to get it
select one.a,two.b
from
one,two
where
one.c=10 and one.a=two.a;
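The declarative query above can be run against any SQL engine. Below is a minimal sketch using Python's sqlite3 module; the toy rows are invented purely for illustration, while the table and column names follow the slide.

```python
import sqlite3

# In-memory database standing in for the mainframe tables
con = sqlite3.connect(":memory:")
con.execute("create table one (a, b, c)")
con.execute("create table two (a, b)")
con.executemany("insert into one values (?,?,?)", [(1, "x", 10), (2, "y", 20)])
con.executemany("insert into two values (?,?)", [(1, "first"), (2, "second")])

# State the result you want; the engine, not the programmer, picks the plan
rows = con.execute(
    "select one.a, two.b from one, two "
    "where one.c = 10 and one.a = two.a"
).fetchall()
print(rows)  # [(1, 'first')]
```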
7. Pre-SQL versus SQL
Pre-SQL code very efficient – runs in
megabytes – VSE mainframe COBOL
Labor intensive
SQL can be inefficient – runs in
gigabytes (if you are lucky!)
Much more productive – do in minutes
what took hours before – create tables
8. What the database doesn’t know
Optimizer has a limited set of statistics
that describe the data
It can miscalculate the number of rows
a query will return, its cardinality
A cardinality error can lead optimizer to
choose a slow way to run the SQL
9. Example plan/Cardinality
-------------------------------------------------
| Id | Operation | Name | Rows | Cost
-------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 3
|* 1 | TABLE ACCESS FULL| TEST1 | 10 | 3
-------------------------------------------------
Plan = how Oracle will run your query
Rows = how many rows optimizer thinks that
step will return
Cost = estimate of time query will take, a
function of the number of rows
10. How to fix cardinality problems
Find out if it really is a cardinality issue
Determine the reason it occurred
Single column
Multiple columns
Choose a strategy
Give the optimizer more information
Override optimizer decision
Change the application
11. Four examples
Four examples of how the optimizer
calculates cardinality
Full scripts and their outputs on portal,
pieces on slides – edited for simplicity
12. Step 1: Find out if it really is a
cardinality issue
Example 1
Data
select a,count(*) from test1 group by a;
A COUNT(*)
---------- ----------
1 10
Query
select * from test1 where a=1;
13. Step 1: Find out if it really is a
cardinality issue
Get estimated cardinality from plan
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 10 |
|* 1 | TABLE ACCESS FULL| TEST1 | 10 |
-------------------------------------------
Do query for actual number of rows
select count(*) from test1 where a=1;
14. Step 1: Find out if it really is a
cardinality issue
Plan is a tree – find the cardinality and run select
count(*) on the part of the query represented by that
part of the plan.
        join
       /    \
  table      join
            /    \
        table    table
15. Step 2: Understand the reason
for the wrong cardinality
Unequal distribution of data:
Within a single column
Last name
“Smith” or “Jones”
Among multiple columns –
Address
Zipcode and State
16. Step 2: Understand the reason
for the wrong cardinality
Example 2 - Unequal distribution of values in a
single column
1,000,000 rows with value 1
1 row with value 2
select a,count(*) from TEST2 group by a;
A COUNT(*)
---------- ----------
1 1000000
2 1
17. Step 2: Understand the reason
for the wrong cardinality
SQL statement – returns one row
select * from TEST2 where a=2;
18. Step 2: Understand the reason
for the wrong cardinality
Plan with wrong number of rows = 500,000
Full scan instead of range scan – 100 times slower
---------------------------------------------
| Operation | Name | Rows |
---------------------------------------------
| SELECT STATEMENT | | 500K|
| INDEX FAST FULL SCAN| TEST2INDEX | 500K|
---------------------------------------------
19. Step 2: Understand the reason
for the wrong cardinality
Column statistics – two distinct values
LOW HIGH NUM_DISTINCT
---------- ---------- ------------
1 2 2
Table statistic – total # of rows – 1,000,001
NUM_ROWS
----------
1000001
20. Step 2: Understand the reason
for the wrong cardinality
Rows in plan = (rows in table)/
(distinct values of column)
500000=1000001/2
Optimizer knew that there were only
two values – assumed they had equal
number of rows
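The slide's division can be checked in a few lines; the numbers are the TEST2 statistics shown above.

```python
# Uniform-distribution estimate: with no histogram, the optimizer assumes
# each distinct value covers an equal share of the table.
num_rows = 1_000_001   # NUM_ROWS table statistic for TEST2
num_distinct = 2       # NUM_DISTINCT column statistic for column A

estimated = num_rows // num_distinct
actual = 1             # only one row really has a = 2

print(estimated)       # 500000 - the Rows value in the plan
```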
21. Step 2: Understand the reason
for the wrong cardinality
Example 3 - Combinations of column values
not equally distributed
1,000,000 rows with values 1,1
1,000,000 rows with values 2,2
1 row with value 1,2
~ Equal numbers of 1’s and 2’s in each column
A B COUNT(*)
---------- ---------- ----------
1 1 1000000
1 2 1
2 2 1000000
22. Step 2: Understand the reason
for the wrong cardinality
SQL statement – retrieves one row
select sum(a+b)
from TEST3
where
a=1 and b=2;
23. Step 2: Understand the reason
for the wrong cardinality
Plan with wrong number of rows = 500,000
Inefficient full scan
----------------------------------------------
| Operation | Name | Rows |
----------------------------------------------
| SELECT STATEMENT | | 1 |
| SORT AGGREGATE | | 1 |
| INDEX FAST FULL SCAN| TEST3INDEX | 500K|
----------------------------------------------
24. Step 2: Understand the reason
for the wrong cardinality
Column statistics
C LOW HIGH NUM_DISTINCT
- ---------- ---------- ------------
A 1 2 2
B 1 2 2
Table statistic – total # of rows – 2,000,001
NUM_ROWS
----------
2000001
25. Step 2: Understand the reason
for the wrong cardinality
Rows in plan = (rows in table)/
(distinct values A * distinct values B)
500000=2000001/(2 * 2)
Optimizer assumes all four
combinations (1,1),(1,2),(2,1),(2,2)
equally likely
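The same arithmetic for the two-column case, using the TEST3 statistics and the actual combination counts from the earlier slide:

```python
# Independence assumption: selectivity(a=1 and b=2) is taken to be
# 1/NDV(A) * 1/NDV(B), as if every combination were equally likely.
num_rows = 2_000_001   # NUM_ROWS for TEST3
ndv_a, ndv_b = 2, 2    # NUM_DISTINCT for columns A and B

estimated = num_rows // (ndv_a * ndv_b)

# The real distribution from the slide: anything but uniform
actual_counts = {(1, 1): 1_000_000, (1, 2): 1, (2, 2): 1_000_000}

print(estimated)               # 500000 - the Rows value in the plan
print(actual_counts[(1, 2)])   # 1 - what the plan should have shown
```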
26. Step 2: Understand the reason
for the wrong cardinality
How to tell which assumption is in play?
Select count(*) each column
select a,count(*) from TEST3 group by a;
select b,count(*) from TEST3 group by b;
count(*) each column combination
select a,b,count(*) from TEST3
group by a,b;
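The three group-by queries above can be demonstrated on a scaled-down copy of the TEST3 data (thousands of rows instead of millions), here using sqlite3:

```python
import sqlite3

# Scaled-down sketch of the slide's TEST3 data
con = sqlite3.connect(":memory:")
con.execute("create table test3 (a, b)")
con.executemany("insert into test3 values (?,?)", [(1, 1)] * 1000)
con.executemany("insert into test3 values (?,?)", [(2, 2)] * 1000)
con.execute("insert into test3 values (1, 2)")

# Per-column counts look nearly balanced...
per_a = con.execute(
    "select a, count(*) from test3 group by a order by a").fetchall()
per_b = con.execute(
    "select b, count(*) from test3 group by b order by b").fetchall()
# ...but counting the combinations exposes the correlation
combos = con.execute(
    "select a, b, count(*) from test3 group by a, b order by a, b").fetchall()

print(per_a)   # [(1, 1001), (2, 1000)]
print(per_b)   # [(1, 1000), (2, 1001)]
print(combos)  # [(1, 1, 1000), (1, 2, 1), (2, 2, 1000)]
```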
27. Step 3: Choose the best strategy
for fixing the cardinality problem
Giving the optimizer more information
Histograms
SQL Profiles
Overriding optimizer decisions
Hints
Changing the application
Try to use optimizer as much as possible to
minimize development work
28. Step 3: Choose the best strategy
for fixing the cardinality problem
Giving the optimizer more information –
using histograms
Works for unequal distribution within a
single column
A histogram records the distribution of
values within a column in up to 254
“buckets”
Works best on columns with fewer than
255 distinct values
29. Step 3: Choose the best strategy
for fixing the cardinality problem
Run gather_table_stats command to get
histograms on the column – 254 is max
number of buckets
method_opt=>'FOR ALL COLUMNS SIZE 254'
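Conceptually, a frequency histogram stores one bucket per distinct value, so the optimizer can look up a literal's row count instead of assuming a uniform spread. A loose Python sketch (not Oracle's actual implementation), using the TEST2 counts from Example 2:

```python
# Simplified frequency histogram: value -> row count, as gathered for TEST2.A
histogram = {1: 1_000_000, 2: 1}

def estimated_cardinality(value):
    # Values absent from the histogram get zero here; the real optimizer
    # uses a small non-zero guess instead.
    return histogram.get(value, 0)

print(estimated_cardinality(2))   # 1 - the index range scan becomes attractive
print(estimated_cardinality(1))   # 1000000 - the full scan stays correct here
```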
30. Step 3: Choose the best strategy
for fixing the cardinality problem
Plan for Example 2 with correct number of rows
with histogram
Uses range scan
-----------------------------------------
| Operation | Name | Rows |
-----------------------------------------
| SELECT STATEMENT | | 1 |
| INDEX RANGE SCAN| TEST2INDEX | 1 |
-----------------------------------------
31. Step 3: Choose the best strategy
for fixing the cardinality problem
Column statistics – two buckets
LOW HIGH NUM_DISTINCT NUM_BUCKETS
---------- ---------- ------------ -----------
1 2 2 2
Table statistic – unchanged
NUM_ROWS
----------
1000001
32. Step 3: Choose the best strategy
for fixing the cardinality problem
Time without histograms (1 second):
Elapsed: 00:00:01.00
Time with histograms(1/100th second):
Elapsed: 00:00:00.01
33. Step 3: Choose the best strategy
for fixing the cardinality problem
Giving the optimizer more information – using
SQL Profiles
Works for unequal distribution among multiple
columns
Includes information about the relationship
between columns in the SQL – correlated columns
or predicates
34. Step 3: Choose the best strategy
for fixing the cardinality problem
SQL Tuning Advisor gathers statistics on the
columns
...DBMS_SQLTUNE.CREATE_TUNING_TASK(...
...DBMS_SQLTUNE.EXECUTE_TUNING_TASK(...
Accept the SQL Profile it creates to use the
new statistics
...DBMS_SQLTUNE.ACCEPT_SQL_PROFILE (...
35. Step 3: Choose the best strategy
for fixing the cardinality problem
Example 3 plan with correct number of rows = 1
using SQL profile
--------------------------------------------------
| Operation | Name | Rows | Bytes |
--------------------------------------------------
| SELECT STATEMENT | | 1 | 6 |
| SORT AGGREGATE | | 1 | 6 |
| INDEX RANGE SCAN| TEST3INDEX | 1 | 6 |
--------------------------------------------------
36. Step 3: Choose the best strategy
for fixing the cardinality problem
Time without a profile (1 second):
Elapsed: 00:00:01.09
Time with a profile(1/100th second):
Elapsed: 00:00:00.01
37. Step 3: Choose the best strategy
for fixing the cardinality problem
Overriding optimizer decisions – using hints
Example 4 has unequal distribution of column
values across two tables – histograms and SQL
Profiles don’t work
Hint forces index range scan
Small amount of additional code – not like
COBOL on the mainframe
38. Step 3: Choose the best strategy
for fixing the cardinality problem
Example 4 - SMALL table
MANY relates to 1 – there are many rows with
value 1
FEW relates to 2 – there are few with value 2
insert into SMALL values ('MANY',1);
insert into SMALL values ('FEW',2);
39. Step 3: Choose the best strategy
for fixing the cardinality problem
Example 4 - LARGE table:
1,000,000 rows with value 1
1 row with value 2
NUM COUNT(*)
---------- ----------
1 1000000
2 1
40. Step 3: Choose the best strategy
for fixing the cardinality problem
SQL statement – returns one row
select B.NUM
from SMALL A,LARGE B
where
A.NUM=B.NUM and
A.NAME='FEW';
41. Step 3: Choose the best strategy
for fixing the cardinality problem
Plan with wrong number of rows = 125,000
----------------------------------------------
| Operation | Name | Rows |
----------------------------------------------
| SELECT STATEMENT | | 125K|
| HASH JOIN | | 125K|
| TABLE ACCESS FULL | SMALL | 1 |
| INDEX FAST FULL SCAN| LARGEINDEX | 1000K|
----------------------------------------------
42. Step 3: Choose the best strategy
for fixing the cardinality problem
Column statistics – two buckets on all
columns – using histograms
LOW HIGH NUM_DISTINCT NUM_BUCKETS
---------- ---------- ------------ -----------
1 2 2 2
LOW HIGH NUM_DISTINCT NUM_BUCKETS
---- ---- ------------ -----------
FEW MANY 2 2
43. Step 3: Choose the best strategy
for fixing the cardinality problem
Table statistics – SMALL has 2 rows,
LARGE 1000001
NUM_ROWS
----------
2
NUM_ROWS
----------
1000001
44. Step 3: Choose the best strategy
for fixing the cardinality problem
125000=1000001/8
Optimizer appears to assume all eight
combinations of the three columns’
values are equally likely
Can’t verify the formula – the references don’t
document it for the case with histograms
Even worse without histograms –
cardinality is 500000
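The slide's arithmetic can be reproduced, though as noted this is only the apparent formula, not one documented in the references:

```python
# The observed estimate matches the large table's row count divided by
# the eight value combinations of the three columns involved
# (SMALL.NAME, SMALL.NUM, LARGE.NUM), each with two distinct values.
num_rows_large = 1_000_001
combinations = 2 * 2 * 2

estimated = num_rows_large // combinations
print(estimated)   # 125000 - the Rows value in the hash join plan
```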
45. Step 3: Choose the best strategy
for fixing the cardinality problem
No SQL profile from SQL Tuning Advisor:
There are no recommendations to improve the
statement.
Neither histograms nor SQL profiles help
example 4
46. Step 3: Choose the best strategy
for fixing the cardinality problem
Statement with hints:
Use index
Don’t do full scan
select /*+ INDEX(B LARGEINDEX)
NO_INDEX_FFS(B LARGEINDEX) */
B.NUM
from SMALL A,LARGE B
where
A.NUM=B.NUM and
A.NAME='FEW';
47. Step 3: Choose the best strategy
for fixing the cardinality problem
Time without a hint (1 second):
Elapsed: 00:00:01.03
Time with a hint (1/100th second):
Elapsed: 00:00:00.01
48. Step 3: Choose the best strategy
for fixing the cardinality problem
Changing the application
Change your tables so that the optimizer gets your
SQL’s cardinality right
Requires more work designing tables, but keeps
productivity benefits of SQL
49. Step 3: Choose the best strategy
for fixing the cardinality problem
Example 4 – moved NAME column to LARGE table and
split table in two
One million (‘MANY’,1) rows in LARGEA
One (‘FEW’,2) row in LARGEB
Query:
select NUM
from (select * from largea
union
select * from largeb)
where
NAME='FEW';
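The redesigned query can be demonstrated on a scaled-down copy of the split tables (a thousand rows instead of a million), again using sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table largea (name, num)")
con.execute("create table largeb (name, num)")
# Scaled down: many 'MANY' rows in LARGEA, a single 'FEW' row in LARGEB
con.executemany("insert into largea values (?,?)", [("MANY", 1)] * 1000)
con.execute("insert into largeb values ('FEW', 2)")

# Each table now has a uniform distribution, so per-table statistics
# describe the data accurately
rows = con.execute(
    "select num from (select * from largea union select * from largeb) "
    "where name = 'FEW'"
).fetchall()
print(rows)   # [(2,)]
```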
50. Step 3: Choose the best strategy
for fixing the cardinality problem
Plan is just as efficient as with hint:
Number of rows = 2 (reality is 1)
Range Scan
--------------------------------------------------------------
| Id | Operation | Name | Rows |
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 |
| 1 | VIEW | | 2 |
| 2 | SORT UNIQUE | | 2 |
| 3 | UNION-ALL | | |
| 4 | TABLE ACCESS BY INDEX ROWID| LARGEA | 1 |
|* 5 | INDEX RANGE SCAN | LARGEAINDEX | 1 |
| 6 | TABLE ACCESS BY INDEX ROWID| LARGEB | 1 |
|* 7 | INDEX RANGE SCAN | LARGEBINDEX | 1 |
--------------------------------------------------------------
51. Step 3: Choose the best strategy
for fixing the cardinality problem
Time without table change (1 second):
Elapsed: 00:00:01.03
Time with table change (1/100th second):
Elapsed: 00:00:00.01
52. Conclusion
SQL improves productivity, optimizer has limits
Identify cases where cardinality is wrong
Understand why the database got it wrong
One column
Multiple columns
Choose best strategy to fix
Give optimizer more info
Override optimizer’s choices
Redesign tables
53. References
Cost Based Optimizer Fundamentals, Jonathan Lewis
Metalink Note:212809.1, Limitations of the Oracle Cost Based
Optimizer
Metalink Note:68992.1, Predicate Selectivity
Histograms – Myths and Facts, Wolfgang Breitling
Select Journal, Volume 13, Number 3