Slides from OOW13
The optimizer must try to be all things to all people, and similarly, the collection of optimizer statistics must try to satisfy the needs of everyone. Many DBAs just leave it at that, but the optimizer offers so much more. With a little extra effort and discipline, we can go well beyond a "one-size-fits-all" policy and maximize the benefit of all the optimizer features. We'll look at the tools now available in DBMS_STATS to get more stability and better performance from optimizer statistics.
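The kind of per-table policy the abstract alludes to can be sketched with real DBMS_STATS calls; the table names below are illustrative, not from the slides:

```sql
-- Tailor the statistics policy per table rather than relying on the
-- one-size-fits-all defaults (SALES / STAGING_LOAD are hypothetical).
begin
  -- force a specific histogram policy for this table only
  dbms_stats.set_table_prefs(
    ownname => user, tabname => 'SALES',
    pname   => 'METHOD_OPT',
    pvalue  => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 REGION');
  -- consider stats stale after 5% change instead of the default 10%
  dbms_stats.set_table_prefs(user, 'SALES', 'STALE_PERCENT', '5');
end;
/

-- freeze stats on a volatile staging table so the nightly job skips it
exec dbms_stats.lock_table_stats(user, 'STAGING_LOAD');
```

The preferences are honoured by the automatic statistics job, so a one-off setup call gives a lasting per-table policy.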
The document discusses the risks and challenges of automatically gathering statistics in an Oracle database. It notes that while gathering statistics is intended to optimize SQL performance, it can sometimes have the unintended effect of adding more expensive SQL or invalidating cached execution plans, potentially slowing performance. The recommendation is that unless database performance is known to be poor, statistics should not be changed automatically and the risks of gathering statistics outweigh the potential benefits.
This document provides an overview of statistics for database developers. It discusses key statistics concepts like cardinality estimation and how statistics are used to estimate the number of rows returned by a query. It also covers important statistics-related topics such as data skew, dynamic sampling, and extended statistics that can impact query optimization. Understanding how the optimizer uses statistics is important for helping the optimizer generate efficient execution plans.
The document describes an algorithm used to purge data from a large IBM DB2 database to reduce its size. Key steps included:
1) Exporting data from large tables to external files and reloading the tables with only valid records to remove invalid data
2) Dropping constraints and indexes from large tables to improve performance during the purge process
3) Setting integrity constraints back on tables after the purge to ensure data validity
[Pgday.Seoul 2019] Distributed databases with Citus (PgDay.Seoul)
This document summarizes how to set up and use Citus, an open-source PostgreSQL-based distributed database. It explains how to install Citus, add worker nodes, create distributed tables, and use features like reference tables to perform distributed queries across the cluster.
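The setup steps listed can be sketched in a few statements on the coordinator; this assumes a recent Citus release where the node-management function is `citus_add_node` (older versions used `master_add_node`), and the node names are illustrative:

```sql
-- enable the extension on the coordinator database
CREATE EXTENSION citus;

-- register the worker nodes
SELECT citus_add_node('worker-1', 5432);
SELECT citus_add_node('worker-2', 5432);

-- shard a table across the workers by tenant_id
CREATE TABLE events (tenant_id int, id bigint, payload jsonb);
SELECT create_distributed_table('events', 'tenant_id');

-- small lookup table replicated to every node, so joins stay local
CREATE TABLE countries (code text PRIMARY KEY, name text);
SELECT create_reference_table('countries');
```

Queries against `events` are then routed or parallelised across the workers transparently.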
This document is a presentation on advanced Cassandra data modeling techniques. It discusses time series modeling, user modeling, using collections like sets, lists and maps, indexing strategies like keyword indexing and bitmap indexing. It encourages the audience to go beyond basic modeling and take advantage of Cassandra features to create "super" models that are fast and efficient. It promotes experimenting with different partitioning and clustering strategies. The presentation concludes by advertising an upcoming modeling competition at the Cassandra summit and sharing a discount code for attendance.
This program takes inputs like cell tower locations, DEM data, and water meter locations to calculate cell reception zones and output them as shapefiles. It performs viewshed analysis on each cell tower to create visibility polygons. These are combined and split into 4 shapefiles representing zones of bad, good, very good, and great reception. A text file is also output listing each water meter location and its assigned reception zone.
Dido Green: Expectations for Therapy and Relationship to Confidence and Compe...Beitissie1
The document summarizes a lecture given by Dido Green at the 6th International Conference on Disabilities in Israel in 2015. The lecture discussed the relationship between expectations for therapy, confidence, competence, and outcomes for children with unilateral cerebral palsy. It describes a study that used a magic-themed intensive therapy program and assessed changes in children's hope, hand function, independence, and perceptions before, during, and after the program. Interviews with the children found that their views of their condition and abilities became more positive following the therapy. The study suggests expectations and confidence may influence progress in movement skills for these children.
The struggling economy is at the forefront of conversations around the globe. In circumstances such as these, companies become too cautious and complacent.
What if you took a different approach? Imagine the influence your business can have on clients and customers if you were the only company aggressively promoting your services and products instead of being complacent. Make the slowing economy an opportunity instead of tightening your grip in fear.
This document provides a summary of techniques for tuning SQL and database performance. It begins by discussing the importance of properly understanding the user requirement before focusing on SQL tuning. Various SQL tuning techniques are then demonstrated, such as adding indexes, partitioning, and materialized views. The document emphasizes the need to avoid side effects from tuning and stresses tuning the overall user experience rather than just the SQL. Diagnostic techniques like ASH and wait events are also covered.
IMTC Miami Conference - Social Media and Remittances - Michael KentMichael Kent
This document discusses how social media can be leveraged for remittances and cross-border consumer payments. It notes that social networks like Facebook have over 1.1 billion users globally, including large populations in both major sending and receiving countries for remittances. The document outlines several ways social media could be used, including for marketing and customer acquisition, know-your-customer compliance, an inbound communication channel for customer support, beneficiary engagement, and providing proof and reassurance to customers. The goal is to make international money transfers more convenient, low-cost, fast, and efficient by taking advantage of existing social networks.
The document provides an introduction to barrier free design and standards for accessibility. It discusses the needs of people with various disabilities including hearing, visual, and mobility impairments. Design requirements are outlined to ensure accessibility for the deaf or hard of hearing through visual signage, and for those with limited or no vision through tactile guidance blocks and braille. Standards are also described for wheelchair users and those with semi-ambulatory disabilities, focusing on clear widths, ramp slopes, handrails, and transfer spaces.
Soaring through the Clouds - Oracle Fusion Middleware Partner Forum 2016 Lucas Jellema
The Oracle ACE team has a new mission: complete a complex end-to-end business flow across at least ten Oracle PaaS Services – in front of a live audience. This session will demonstrate how a document driven human workflow triggers an integration flow to update a 3rd party application that in turn emits events that are processed in real time resulting in findings that are published through a REST API in a user friendly front end. Expect guest appearances by an interesting Oracle PaaS cast, including Doc CS, PCS, OSN, Sites CS and ICS and also featuring DBaaS, JCS and SOA CS, Application Container Cloud with a touch of MCS and IoT CS and finally a JET [app] cruising through the clouds. Our flight plan depends a little bit on the weather forecast: we do need a cloudy sky to realize our full potential. The team will perform some live hacking in the various cloud services to complete and tweak the end-to-end flow. We will divulge some of the behind-the-scenes challenges and our findings beyond slideware and C-level promises. A very special guest star will be participating in this session – demonstrating an important attraction of cloud based development.
Oracle OpenWorld 2016 Review - High Level Overview of major themes and grand ...Lucas Jellema
Overview of the highlights, main themes and grand announcements during Oracle OpenWorld 2016. Cloud, Big Data, Machine Learning, Infrastructure, raging against AWS and the Oracle future strategy are the chief topics.
Slides from OpenWorld 2019. A look at how to safely (and unsafely) kill sessions in the Oracle database, and how to perhaps avoid killing them altogether.
This document discusses several features introduced in Oracle Database 12c and 18c that improve the handling of SQL and PL/SQL code. It covers longer identifier names, compile-time resolvable expression sizes, improved overflow handling for listagg, column-level collation, and deprecating code. Examples are provided to demonstrate the usage and benefits of each feature.
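The listagg overflow handling mentioned can be sketched as follows, using the classic EMP demo table (12c Release 2 syntax):

```sql
-- truncate gracefully instead of raising ORA-01489 when the
-- aggregated string exceeds the VARCHAR2 limit
select deptno,
       listagg(ename, ',' on overflow truncate '...' with count)
         within group (order by ename) as staff
from   emp
group  by deptno;
```

With `WITH COUNT`, the truncated result ends in something like `...(14)`, showing how many values were omitted.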
APEX tour 2019 - successful development with autonomousConnor McDonald
The autonomous database offers insane levels of performance, but you won't be able to attain that if you are not constructing your SQL statements in a way that is scalable...and more importantly, secure from hacking.
Apologies for most pics missing and awful layout...you can thank slideshare for that :-(
Latin America Tour 2019 - 10 great sql featuresConnor McDonald
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
OG Yatra - upgrading to the new 12c+ optimizerConnor McDonald
The 12c optimizer has a vast array of improvements, but of course, functionality changes mean that your SQL plans might also change when you upgrade. This slide deck covers what has changed, and how to ensure better, more stable performance when you upgrade.
This presentation discusses several new features in Oracle Database 12c including:
1) Total Recall which allows querying historical data as of a past timestamp using Flashback Archive.
2) Context extension which captures additional context like user, client, and IP address with redo data in Flashback Archive.
3) TRUNCATE TABLE now supports cascading deletes to dependent child tables when referenced keys have ON DELETE CASCADE constraints.
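The Flashback Archive feature described in point 1 can be sketched like this; the archive name `fba1` is hypothetical and assumed to exist already:

```sql
-- start keeping history for this table in an existing flashback archive
alter table emp flashback archive fba1;

-- query the table as it was ten minutes ago
select *
from   emp
as of timestamp systimestamp - interval '10' minute;
```

Once the table is enrolled, `AS OF TIMESTAMP` queries can reach back as far as the archive's retention period, well beyond what undo alone would allow.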
This document discusses features of Oracle Database 12c related to auditing and tracking changes over time. It summarizes that Oracle 12c includes flashback data archive, which allows viewing or restoring data to a previous state. This feature can be used for auditing and tracking changes made to database tables. The document also discusses how Oracle 12c captures additional context metadata with each change, including user, host, and program used, allowing more detailed tracking of changes than prior releases.
Slides from OpenWorld. Flashback has been around for a long time, yet people assume it sits entirely within the realm of the DBA. But with modern development techniques such as continuous integration/continuous deployment, flashback is actually a perfect fit for *developers*.
Oracle 12c Automatic Data Optimization (ADO) - ILMMonowar Mukul
Automatic Data Optimization (ADO) automatically moves and compresses data according to user-defined policies based on statistics collected by Heat Map. Heat Map tracks data access patterns at the row and segment levels. ADO policies can be defined to compress or move segments after a specified number of days with no modifications. When testing compression policies, ADO automatically compressed the SALES_ADO table after 20 days of no modifications, as determined by simulated Heat Map statistics.
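The compression policy described for SALES_ADO can be sketched with the ILM clause; this assumes Heat Map has been enabled instance-wide:

```sql
-- prerequisite: statistics collection for ADO decisions
-- alter system set heat_map = on;

-- compress the whole segment once it has seen no DML for 20 days
alter table sales_ado ilm add policy
  row store compress advanced segment
  after 20 days of no modification;
```

ADO then evaluates the policy in the maintenance window and compresses the segment automatically, with no application change.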
SQL Macros - Game Changing Feature for SQL Developers?Andrej Pashchenko
SQL Macros are functions that return SQL statements as text. When called in a SQL statement, the returned SQL text is parsed and optimized rather than executing the function at runtime. This avoids context switches to PL/SQL and allows the optimizer to see the full SQL. Table SQL macros can be called in the FROM clause and act like views or inline queries, except they allow parameters to make the views polymorphic. Scalar parameters in the returned SQL text are substituted like bind variables to make the macros more reusable and flexible.
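A minimal table macro might look like the sketch below (19c or later; the function, table, and parameter names are illustrative):

```sql
-- the function returns SQL *text*; at parse time that text is expanded
-- into the calling query, so there is no runtime PL/SQL call
create or replace function first_rows(
  t        dbms_tf.table_t,
  how_many number)
  return varchar2 sql_macro is
begin
  -- "how_many" behaves like a bind variable in the expanded SQL
  return 'select * from t fetch first how_many rows only';
end;
/

-- used like a parameterised view
select * from first_rows(emp, 3);
```

Because the optimizer sees the expanded SQL, it can use indexes and transformations that an opaque PL/SQL function call would hide.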
This document summarizes Alex Fatkulin's experience running GoldenGate on Exadata. It discusses general configuration considerations like using DBFS for trail files and parameter files. It provides tips for optimizing the Manager, Extract, DataPump, and Replicat components, including redo access options, bounded recovery, compressed tables, and transient primary key updates. It also covers DBFS performance considerations related to GoldenGate's I/O profile.
This document discusses the evolution of user-defined functions (UDFs) in Oracle SQL over multiple Oracle database versions. It shows how UDFs started as PL/SQL functions callable from SQL in earlier versions, which could impact performance. It then demonstrates how newer Oracle database versions allow defining UDFs directly in SQL for improved performance and maintainability when using functions in SQL statements and queries. The document provides examples of different ways to implement and call UDFs across various Oracle versions.
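The "UDF directly in SQL" approach refers to the 12c `WITH FUNCTION` clause, sketched here with an illustrative calculation:

```sql
-- the function lives inside the statement itself: no separate PL/SQL
-- object, and reduced context-switch cost per row
with
  function net_pay(p_sal number) return number is
  begin
    return p_sal * 0.7;  -- illustrative flat tax rate
  end;
select ename, sal, net_pay(sal) as take_home
from   emp
/
```

Note the statement is terminated with `/` rather than `;` in SQL*Plus and SQLcl, since the body contains semicolons of its own.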
Dbms plan - A swiss army knife for performance engineersRiyaj Shamsudeen
This document discusses dbms_xplan, a tool for performance engineers to analyze execution plans. It provides options for displaying plans from the plan table, the shared SQL area in memory, and AWR history. Dbms_xplan provides more detailed information than traditional tools like tkprof, including predicates, notes, bind values, and plan history. It requires privileges on dictionary views for displaying plans from memory and AWR. The document also demonstrates usage examples and output formats for the dbms_xplan functions.
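A typical use of the shared-SQL-area option is `display_cursor`, sketched here against the EMP demo table:

```sql
-- run the statement with rowsource statistics enabled
select /*+ gather_plan_statistics */ count(*)
from   emp
where  deptno = 10;

-- show the actual plan of the last statement in this session,
-- including estimated vs actual row counts per step
select *
from   table(dbms_xplan.display_cursor(
         sql_id          => null,
         cursor_child_no => null,
         format          => 'ALLSTATS LAST'));
```

Comparing the E-Rows and A-Rows columns in the output is one of the quickest ways to spot a cardinality misestimate.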
The document discusses data modeling techniques for Cassandra and provides examples for four use cases: shopping cart data, user activity tracking, log collection/aggregation, and user form versioning. For each use case, it describes the business needs, issues with a relational database approach, and proposes a Cassandra data model using CQL. It emphasizes the importance of proper data modeling and getting the model right for a given use case.
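For the shopping-cart use case, a Cassandra model along the lines the abstract describes might look like this CQL sketch (the table and column names are hypothetical, not taken from the deck):

```sql
-- one partition per user: the whole cart is a single partition read,
-- clustered newest-first
CREATE TABLE shopping_cart (
  user_id  uuid,
  added_at timestamp,
  item_id  uuid,
  qty      int,
  PRIMARY KEY ((user_id), added_at, item_id)
) WITH CLUSTERING ORDER BY (added_at DESC, item_id ASC);

-- fetch a user's cart in one request
SELECT * FROM shopping_cart WHERE user_id = ?;
```

The partition key is chosen for the access pattern, which is the central point of Cassandra modeling as opposed to relational normalization.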
The document describes several databases related to banking, insurance, orders, students, and books. It includes the structure of each database with table definitions and sample data. Various SQL queries are demonstrated to retrieve, update, insert and delete records in the tables to solve business problems for each database application.
Most important "trick" of performance instrumentationCary Millsap
This is the material from my 10-minute TED-style talk 2014-09-29 at OakTable World held in conjunction with Oracle OpenWorld 2014 in San Francisco. It explains the importance of assigning a unique id to the Oracle Database code path associated with each performance experience that users can have with your system.
This document discusses views in Oracle databases. It defines a view as a virtual table derived from one or more underlying base tables or other views. The document covers how to create simple and complex views, modify view definitions, perform DML operations on views, and remove views. Key points include that views allow restricting access, simplifying queries, and presenting different perspectives of the same data without affecting the base tables.
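The "restricting access" and "simplifying queries" points can be sketched over the classic EMP/DEPT tables:

```sql
-- simple view: hides the SAL column, restricting what callers can see
create or replace view emp_public as
  select empno, ename, deptno
  from   emp;

-- complex view: pre-packages a join and aggregation for reuse
create or replace view dept_salary_summary as
  select d.dname, count(*) as headcount, sum(e.sal) as total_sal
  from   dept d
  join   emp  e on e.deptno = d.deptno
  group  by d.dname;
```

DML is generally possible through the simple view but not the aggregated one, which matches the key-preserved rules the document covers.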
The document summarizes how SQL Plan Directives in Oracle 12c can help address issues caused by cardinality misestimation in the optimizer. It provides an example where the optimizer underestimates the number of rows returned by a query on a table due to not having statistics on correlated columns. In 12c, a SQL Plan Directive is automatically generated after the first execution to capture this misestimation. On subsequent queries, the directive can be used to provide more accurate cardinality estimates through automatic reoptimization or dynamic sampling.
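A related manual remedy for the correlated-columns problem is a column-group (extended) statistic; the table and columns below are illustrative:

```sql
-- create a virtual column group so the optimizer knows MAKE and MODEL
-- are correlated rather than independent
select dbms_stats.create_extended_stats(user, 'CARS', '(MAKE,MODEL)')
from   dual;

-- regather so the column group gets its own statistics
begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'CARS',
    method_opt => 'for all columns size auto ' ||
                  'for columns (make,model) size auto');
end;
/
```

In 12c, SQL Plan Directives can trigger the creation of such column groups automatically after a misestimate is observed.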
Slides from the ITOUG events in Rome and Milan 2020.
Most people think of the Flashback features in Oracle as the "In Case of Emergency" switch, to only be used when some catastrophe has occurred on your database. And while it is true that Flashback will definitely help you 3 seconds after you press the Commit button and you realise that you probably needed to have a WHERE clause on that "delete all rows from the SALES table" SQL statement. Or for when you run "drop table" on the Production database, when you were just so sure that you were logged onto the Test system. But Flashback is not only for those "Oh No!" moments. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will also open up a myriad of other benefits of well.
Another year goes by, and most likely, another data access framework has been invented. It will claim to be the fastest, smartest way to talk to the database, and just like all those that came before it, it will not be. Because the best database access tool has been there for more than 30 years now, and that is PL/SQL. Although we all sometimes fall prey to the mindset of “Oh look, a shiny new tool, we should start using it," the performance and simplicity of PL/SQL remain unmatched. This session looks at the failings of other data access languages, why even a cursory knowledge of PL/SQL will make you a better developer, and how to get the most out of PL/SQL when it comes to database performance.
Analytic SQL functions, or "window functions", have been available since 8.1.6, but they are still dramatically underused by application developers. This session looks at the syntax and usage of analytic functions, and how they can supercharge your SQL skill set.
Covers analytics from their inception in 8.1.6 all the way through to enhancements in 18c and 19c.
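The flavour of the feature can be shown in one query on the EMP demo table: rankings and running totals without any self-joins:

```sql
select deptno, ename, sal,
       -- rank within each department by salary
       rank()   over (partition by deptno order by sal desc) as sal_rank,
       -- department total repeated on every row of that department
       sum(sal) over (partition by deptno)                   as dept_total,
       -- cumulative total across the whole result
       sum(sal) over (order by empno)                        as running_total
from   emp;
```

Each `OVER` clause defines its own window, so one pass over the data can answer questions that would otherwise need several correlated subqueries.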
Sangam 19 - Successful Applications on AutonomousConnor McDonald
The autonomous database offers insane levels of performance, but you won't be able to attain that if you are not constructing your SQL statements in a way that is scalable...and more importantly, secure from hacking.
The document discusses various ways to concatenate or aggregate column values in Oracle databases. Older methods like XMLAGG, CONNECT BY, and custom aggregate functions are compared to the simpler LISTAGG function available in Oracle 11g and higher. Upgrading to newer database versions brings improved developer productivity through easier string aggregation queries.
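The contrast the document draws can be sketched side by side; the XMLAGG form is one of several pre-11g contortions:

```sql
-- pre-11g: string aggregation via XMLAGG
select deptno,
       rtrim(xmlagg(xmlelement(e, ename || ',') order by ename)
               .extract('//text()').getstringval(), ',') as staff
from   emp
group  by deptno;

-- 11g and later: the same result with LISTAGG
select deptno,
       listagg(ename, ',') within group (order by ename) as staff
from   emp
group  by deptno;
```

The LISTAGG version is not just shorter; it avoids the XMLType conversions and is typically cheaper at runtime.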
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
Slides from the Perth and Melbourne legs of the APAC Groundbreakers Tour. This session covered the features in 18c, 19c, and 20c, along with the new free database offerings Oracle announced at OpenWorld 2019.
Slides from the OpenWorld talk on read consistency. It is the feature that makes Oracle such a great database for performance and concurrency. But if misunderstood, it can lead to confusion for developers
Slides from OpenWorld 2019. Want to make sure your applications are slow, burn lots of CPU, and are easily broken into by hackers? Well...in reality, if you know how to do this, then you'll know how to avoid it.
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will also open up a myriad of other benefits as well.
The document discusses pattern matching and summarizing employee data by department. It provides examples of using SQL to concatenate employee names grouped by department, including older techniques using MODEL clause, CONNECT BY, and XMLTRANSFORM, as well as newer techniques using LISTAGG. It also discusses challenges in summarizing data and provides an example of analyzing customer transaction data to identify customers meeting growth criteria over single and multiple days.
Latin America Tour 2019 - slow data and SQL processing (Connor McDonald)
The document discusses techniques for improving SQL performance by reducing parsing overhead. It describes how the library cache can store the results of previous SQL parses to avoid reparsing identical or similar statements. Binding SQL statements with placeholders avoids unnecessary reparsing when statements differ only by literal values. The document emphasizes that binding user input values is critical for security to prevent SQL injection attacks.
This document discusses various SQL join queries using the EMP and DEPT tables in the Oracle database. It provides examples of inner joins, outer joins, natural joins, cross joins, and lateral joins. It explores different join types and syntax as well as filtering criteria and partitioning.
The skill set of a database practitioner is much more than what is read in the documentation, on blogs, or on StackOverflow. It is the knowledge from years of trial and error, experimentation, and sometimes painful failures. The problem is it takes time—a long, long time—to build that experience. This session aims to fast-track that path. Get a collection of hints, tips, features, and techniques picked up from the smartest people in the community.
OG Yatra - Flashback, not just for developers (Connor McDonald)
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will also open up a myriad of other benefits as well.
Kscope19 - Flashback: Good for Developers as well as DBAs (Connor McDonald)
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will also open up a myriad of other benefits as well.
Kscope19 - Understanding the basics of SQL processing (Connor McDonald)
Better data access typically means understanding how SQL is processed by the database, and who has time for that? Let's peel back the covers to show how SQL is processed, how to avoid getting hacked, and how to get data back to your application in a snappy fashion.
18. 9/26/2013
times have changed ...
35
36
SQL> desc DBMS_STATS
FUNCTION CLOB_TO_VARRAY
FUNCTION CONV_RAW
FUNCTION CREATE_EXTENDED_STATS
FUNCTION DIFF_TABLE_STATS_IN_HISTORY
FUNCTION DIFF_TABLE_STATS_IN_PENDING
FUNCTION DIFF_TABLE_STATS_IN_STATTAB
FUNCTION GET_COMPATIBLE
FUNCTION GET_PARAM
FUNCTION GET_PREFS
FUNCTION GET_STATS_HISTORY_AVAILABILITY
FUNCTION GET_STATS_HISTORY_RETENTION
FUNCTION GET_STAT_TAB_VERSION
FUNCTION REPORT_COL_USAGE
FUNCTION REPORT_GATHER_AUTO_STATS
FUNCTION REPORT_GATHER_DATABASE_STATS
FUNCTION REPORT_GATHER_DICTIONARY_STATS
26.
... smarter than you
51
52
SQL> select count(e.hiredate)
2 from DEPT d, EMP e
3 where e.deptno = d.deptno(+)
4 and e.sal > 10;
nested loop outer
sort merge outer
hash hash anti
nested loop anti
27.
53
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
|* 2 | TABLE ACCESS FULL| EMP | 14 |
-------------------------------------------
no DEPT ?
54
SQL> select count(e.hiredate)
2 from DEPT d, EMP e
3 where e.deptno = d.deptno(+)
4 and e.sal > 10;
not null
foreign key
key preserved
32.
63
SQL> select client_name, status
2 from DBA_AUTOTASK_CLIENT;
CLIENT_NAME STATUS
------------------------------------ --------
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor ENABLED
most sites
64
stats every night
default options
61.
121
select *
from PEOPLE
insert into PEOPLE
select * from ...
select ...
from PEOPLE,
DEPT
where ...
delete from T
where X in
( select PID
from PEOPLE )
declare
v people.name%type;
begin
...
SQL> begin
2 dbms_stats.gather_table_stats(
3 'DEMO',
4 'PEOPLE');
5 end;
6 /
122
62.
oracle 9
123
124
SQL> desc DBMS_STATS
PROCEDURE GATHER_TABLE_STATS
Argument Name Type In/Out Default?
----------------------- -------------- ------ --------
OWNNAME VARCHAR2 IN
TABNAME VARCHAR2 IN
PARTNAME VARCHAR2 IN DEFAULT
ESTIMATE_PERCENT NUMBER IN DEFAULT
BLOCK_SAMPLE BOOLEAN IN DEFAULT
METHOD_OPT VARCHAR2 IN DEFAULT
DEGREE NUMBER IN DEFAULT
GRANULARITY VARCHAR2 IN DEFAULT
CASCADE BOOLEAN IN DEFAULT
STATTAB VARCHAR2 IN DEFAULT
STATID VARCHAR2 IN DEFAULT
STATOWN VARCHAR2 IN DEFAULT
NO_INVALIDATE BOOLEAN IN DEFAULT
STATTYPE VARCHAR2 IN DEFAULT
FORCE BOOLEAN IN DEFAULT
69.
137
unless things are bad...
do not change statistics
138
[chart: "inflammatory statements which will alienate the audience" plotted against "presentation duration"]
120.
239
some real data
240
SQL> desc VEHICLE
Name Null? Type
-------------------------- -------- -------------
ID NUMBER
MAKE VARCHAR2(12)
MODEL VARCHAR2(12)
...
SQL> select count(*)
2 from VEHICLE;
COUNT(*)
------------
4,770,662
121.
241
default stats not enough
242
SQL> select count(*)
2 from VEHICLE
3 where MAKE = 'TOYOTA';
COUNT(*)
----------
608822
------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 18|
| 1 | SORT AGGREGATE | | 1 | 7 | |
|* 2 | INDEX RANGE SCAN| MAKE_IX | 6325 | 44275 | 18|
------------------------------------------------------------
148.
see it !
295
SQL> select sql_id,
  2         child_number,
  3         is_reoptimizable,
  4         is_shareable
  5  from v$sql s
  6  where sql_text like 'select ...';
SQL_ID CHILD_NUMBER I I
------------- ------------ - -
68vghuy0cr298 0 Y N
68vghuy0cr298 1 N Y
296
149.
hangs around
297
SQL> exec dbms_spd.flush_sql_plan_directive;
PL/SQL procedure successfully completed.
SQL> select d.type, d.reason, o.object_name
2 from dba_sql_plan_directives d,
3 dba_sql_plan_dir_objects o
4 where o.directive_id = d.directive_id
5 and o.owner = 'SCOTT';
TYPE REASON OBJ
---------------- ------------------------------------ -----
DYNAMIC_SAMPLING SINGLE TABLE CARDINALITY MISESTIMATE DEPT
298
167.
333
SQL> desc DBA_TABLES
Name Null? Type
----------------------------- -------- -------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
...
NUM_ROWS NUMBER
334
SQL> desc DBA_TAB_COLS
Name Null? Type
----------------------------- -------- -------------
OWNER NOT NULL VARCHAR2(30)
TABLE_NAME NOT NULL VARCHAR2(30)
COLUMN_NAME NOT NULL VARCHAR2(30)
...
NUM_DISTINCT NUMBER
LOW_VALUE RAW(32)
HIGH_VALUE RAW(32)
...
168.
335
SQL> desc PEOPLE
Name Null? Type
----------------------------- -------- -------------
PID NUMBER
GENDER CHAR(1)
NAME VARCHAR2(47)
AGE NUMBER
336
SQL> alter session set sql_trace = true;
Session altered.
SQL> begin
2 dbms_stats.gather_table_stats(
3 'DEMO',
4 'PEOPLE');
5 end;
6 /
177.
353
96 127
354
SQL> set serverout on
SQL> declare
2 type t_bucket is table of varchar2(1);
3 l_synopsis t_bucket;
4 l_splits number := 0;
5 l_hash int;
6 l_min_val int := 0;
7 l_synopsis_size int := 16;
8 begin
9 for i in ( select single_char from one_pass ) loop
10 l_hash := ascii(i.single_char);
11
12 if l_synopsis.count = l_synopsis_size then
13 l_min_val :=
14 case
15 when l_min_val = 0 then 64
16 when l_min_val = 64 then 96
17 when l_min_val = 96 then 112
18 when l_min_val = 112 then 120
19 end;
20 l_splits := l_splits + 1;
178.
355
21 dbms_output.put_line('Splitting, keeping entries above '||l_min_val);
22
23 for j in 1 .. l_min_val loop
24 if l_synopsis.exists(j) then
25 l_synopsis.delete(j);
26 end if;
27 end loop;
28 end if;
29
30 if l_hash > l_min_val then
31 l_synopsis(l_hash) := 'Y';
32 end if;
33 end loop;
34 dbms_output.put_line(l_synopsis.count *
35 power(2,l_splits));
36 end;
37 /
Splitting, keeping entries above 64
Splitting, keeping entries above 96
Splitting, keeping entries above 112
88
356
the reality
16384 bucket limit
18,446,744,073,709,551,616 hash range
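The split-and-double trick in the PL/SQL demo above can be sketched outside the database as well. The following Python sketch is an illustration only, not Oracle's actual synopsis code; the `approx_ndv` helper and its parameters are invented for this example. It keeps at most a fixed number of distinct hash values, discards the bottom half of the remaining hash range whenever the synopsis fills up, and scales the survivor count back up by a factor of two per split:

```python
import hashlib

def approx_ndv(values, synopsis_size=16, hash_bits=32):
    # keep at most synopsis_size distinct hashes; when full,
    # halve the remaining hash range and remember the split
    domain = 2 ** hash_bits
    min_val = 0                      # hashes <= min_val are ignored
    splits = 0
    synopsis = set()
    for v in values:
        h = int(hashlib.sha256(str(v).encode()).hexdigest(), 16) % domain
        if h <= min_val:
            continue
        synopsis.add(h)
        while len(synopsis) > synopsis_size:
            min_val = (min_val + domain) // 2   # keep only the top half
            splits += 1
            synopsis = {x for x in synopsis if x > min_val}
    # each survivor stands for roughly 2**splits distinct values
    return len(synopsis) * 2 ** splits

print(approx_ndv(range(1000)))   # one pass, bounded memory
```

With only 16 slots the estimate is coarse, exactly like the "88" the PL/SQL demo printed for a larger true count; the real synopses use up to 16384 buckets over a 64-bit hash range, which is where the accuracy comes from.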
195.
389
SQL> desc SYS.COL_USAGE$
Name Null? Type
----------------------- -------- ---------
OBJ# NUMBER
INTCOL# NUMBER
EQUALITY_PREDS NUMBER
EQUIJOIN_PREDS NUMBER
NONEQUIJOIN_PREDS NUMBER
RANGE_PREDS NUMBER
LIKE_PREDS NUMBER
NULL_PREDS NUMBER
TIMESTAMP DATE
390
Oracle doesn’t know ...
202.
403
boundary conditions
404
SQL> create table T (
2 skew varchar2(10),
3 even number);
Table created.
SQL> insert into T
2 select
3 case
4 when rownum > 99995 then 'SPECIAL'
5 else dbms_random.string('U',8)
6 end,
7 mod(rownum,200)
8 from dual
9 connect by level <= 100000
10 /
100000 rows created.
5 special values
even distribution
203.
405
SQL> exec dbms_stats.gather_table_stats(
user,'T', estimate_percent=>null);
PL/SQL procedure successfully completed.
SQL> select COLUMN_NAME,NUM_DISTINCT,DENSITY,
2 ( select count(*)
3 from user_tab_histograms
4 where table_name = 'T'
5 and column_name = c.column_name ) HIST_CNT
6 from user_tab_cols c
7 where table_name = 'T'
8 order by table_name,COLUMN_ID
9 /
COLUMN_NAME NUM_DISTINCT DENSITY HIST_CNT
------------------- ------------ ---------- ----------
SKEW 99996 .00001 2
EVEN 200 .005 2
406
SQL> select * from T where skew = 'SPECIAL';
SKEW EVEN
---------- ----------
SPECIAL 196
SPECIAL 197
SPECIAL 198
SPECIAL 199
SPECIAL 0
5 rows selected.
SQL> select * from T where even = 5;
SKEW EVEN
---------- ----------
IBRXGVIE 5
[snip]
500 rows selected.
204.
407
SQL> exec dbms_stats.gather_table_stats(
user,'T', estimate_percent=>null);
PL/SQL procedure successfully completed.
SQL> select COLUMN_NAME,NUM_DISTINCT,DENSITY,
2 ( select count(*)
3 from user_tab_histograms
4 where table_name = 'T'
5 and column_name = c.column_name ) HIST_CNT
6 from user_tab_cols c
7 where table_name = 'T'
8 order by table_name,COLUMN_ID
9 /
COLUMN_NAME NUM_DISTINCT DENSITY HIST_CNT
------------------- ------------ ---------- ----------
SKEW 99996 .00001 2
EVEN 200 .000005 200
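To see why the histogram matters, here is a toy Python model of the two estimates; it is illustrative only (the function names and the tiny histogram dictionary are invented for this sketch, not optimizer internals), using the numbers from the demo table T:

```python
def est_no_histogram(num_rows, num_distinct):
    # without a histogram the optimizer assumes a uniform spread:
    # every value is expected to match num_rows / num_distinct rows
    return num_rows / num_distinct

def est_with_frequency_histogram(hist, value):
    # a frequency histogram records the row count of each value,
    # so skewed values like 'SPECIAL' get an individual estimate
    return hist.get(value, 0)

# table T: 100,000 rows; EVEN has 200 distinct values,
# SKEW has 99,996 (including the 5 'SPECIAL' rows)
print(round(est_no_histogram(100_000, 200)))     # EVEN: 500, spot on
print(round(est_no_histogram(100_000, 99_996)))  # SKEW: ~1 row, wrong for 'SPECIAL'
hist = {"SPECIAL": 5}   # hypothetical histogram entry for the popular value
print(est_with_frequency_histogram(hist, "SPECIAL"))  # 5 rows
```

Uniform columns like EVEN are fine without a histogram; only the skewed column needs the extra per-value detail.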
408
recommendation #5
213.
425
analyze ... validate structure
426
SQL> desc INDEX_STATS
Name Null? Type
----------------------------- -------- --------------
HEIGHT NUMBER
BLOCKS NUMBER
NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
...
OPT_CMPR_COUNT NUMBER
OPT_CMPR_PCTSAVE NUMBER
214.
427
wrap up
428
don't collect stats .... unless
don't collect system stats ... unless
don't collect histograms ... unless
default estimate size (NDV)
lie and cheat