This document discusses five things about SQL and PL/SQL that one may not be aware of. It begins with an agenda that lists five topics: 1) The optimizer is learning from its mistakes; 2) Functions used without using a function; 3) PL/SQL warned you; 4) Location, location, location; 5) The most underutilized really cool feature from five years ago. It then proceeds to provide examples and explanations for each topic.
This paper describes the evolution of the Plan table and DBMS_XPLAN in 11g, and some of the features that can be used to troubleshoot SQL performance effectively and efficiently.
These are the slides I used to present "MySQL Performance Schema" at FOSSASIA 2015, Singapore. They give an overview of Performance Schema and explain how it can be used to diagnose issues, using a few use cases.
Wellington APAC Groundbreakers tour - Upgrading to the 12c Optimizer (Connor McDonald)
The 12c optimizer has a vast array of improvements, but of course, functionality changes mean that your SQL plans might also change when you upgrade. This slide deck covers what has changed, and how to ensure better, more stable performance when you upgrade.
SQL tuning
An execution plan is the output of the optimizer and is presented to the execution engine for implementation. It instructs the execution engine about the operations it must perform to retrieve the data required by a query as efficiently as possible.
The EXPLAIN PLAN statement gathers execution plans chosen by the Oracle optimizer for
the SELECT, UPDATE, INSERT, and DELETE statements. The steps of the execution plan are
not performed in the order in which they are numbered. There is a parent-child relationship
between steps. The row source tree is the core of the execution plan. It shows the following information:
• An ordering of the tables referenced by the statement
• An access method for each table mentioned in the statement
• A join method for tables affected by join operations in the statement
• Data operations, such as filter, sort, or aggregation
In addition to the row source tree (or data flow tree for parallel operations), the plan table
contains information about the following:
• Optimization, such as the cost and cardinality of each operation
• Partitioning, such as the set of accessed partitions
• Parallel execution, such as the distribution method of join inputs
The EXPLAIN PLAN results help you determine whether the optimizer selects a particular execution plan.
When you tune a SQL statement in an OLTP environment, the goal is to drive from the table that has the most selective filter. This means that fewer rows are passed to the next step; if the next step is a join, fewer rows are joined. Check whether the access paths are optimal. When you examine the optimizer execution plan, confirm the following:
• The plan is such that the driving table has the best filter.
• The join order in each step returns the fewest rows to the next step (that is, the join order should go to the best not-yet-used filters).
• The join method is appropriate for the number of rows being returned. For example,
nested loops joins through indexes may not be optimal when many rows are returned.
• Views are used efficiently. Look at the SELECT list to see whether access to the view is
necessary.
• There are no unintentional Cartesian products (even with small tables).
• Each table is being accessed efficiently. Consider the predicates in the SQL statement and the number of rows in the table. Look for suspicious activity, such as a full table scan on a table with a large number of rows that has predicates in the WHERE clause. Note that a full table scan might be more efficient on a small table, or when it enables a better join method (for example, a hash join) for the number of rows returned.
If any of these conditions are not optimal, consider restructuring the SQL statement or the indexes available on the tables.
There are many ways to retrieve execution plans inside the database. The most well-known ones are listed in the slide:
• The EXPLAIN PLAN command enables you to view the execution plan that the optimizer would use, without actually executing the statement.
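As an illustrative sketch (the EMP table and the DEPTNO filter are assumptions, not taken from the slides), a plan can be captured and then formatted with EXPLAIN PLAN and DBMS_XPLAN:

EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;   -- parses and stores the plan in PLAN_TABLE; the query itself is not run

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY); -- formats the most recently explained plan

For a statement that has already executed, DBMS_XPLAN.DISPLAY_CURSOR reports the plan actually used by the cursor in the library cache.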
In this first of a series of presentations, we'll give an overview of the differences between SQL and PL/SQL, and the first steps in optimization, such as understanding RULE vs. COST, and how to cut response time by 90% in data extractions running in SQL*Plus.
Slides from the Oracle ANZ workshop held in Sydney and Melbourne. We look at the killer features that will make 18c and 19c great productivity upgrades for DBAs
Trace File Analyzer - Usage and Features (Sandesh Rao)
TFA, or Trace File Analyzer, is an Oracle tool to collect, analyze and perform machine learning on Oracle logs and other data sources. This presentation covers all the options available with the tool and is updated to version 18.3.0. This is a comprehensive deck for anyone who wants to acquaint themselves with TFA.
Slides from the ITOUG events in Rome and Milan 2020.
Most people think of the Flashback features in Oracle as the "In Case of Emergency" switch, only to be used when some catastrophe has occurred on your database. And it is true that Flashback will definitely help you 3 seconds after you press the Commit button and realise that you probably needed a WHERE clause on that "delete all rows from the SALES table" statement, or when you run "drop table" on the Production database when you were just so sure you were logged onto the Test system. But Flashback is not only for those "Oh No!" moments. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career; they will also open up a myriad of other benefits as well.
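As a hedged sketch of the two scenarios above (assuming sufficient undo retention, and that row movement has been enabled on the table), the rescue looks roughly like this:

-- committed DELETE without a WHERE clause: rewind the table a few minutes
FLASHBACK TABLE sales TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);

-- accidental DROP TABLE: restore it from the recycle bin
FLASHBACK TABLE sales TO BEFORE DROP;

-- or simply query the data as it was, without changing anything
SELECT * FROM sales AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);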
Another year goes by, and most likely, another data access framework has been invented. It will claim to be the fastest, smartest way to talk to the database, and just like all those that came before it, it will not be. Because the best database access tool has been there for more than 30 years now, and that is PL/SQL. Although we all sometimes fall prey to the mindset of “Oh look, a shiny new tool, we should start using it," the performance and simplicity of PL/SQL remain unmatched. This session looks at the failings of other data access languages, why even a cursory knowledge of PL/SQL will make you a better developer, and how to get the most out of PL/SQL when it comes to database performance.
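A minimal sketch of what that simplicity can look like (the EMP table and the business rule are assumptions for illustration): a PL/SQL procedure gives the application a single, set-based entry point to the data instead of row-by-row calls from the middle tier.

CREATE OR REPLACE PROCEDURE raise_dept_salary (p_deptno IN NUMBER, p_pct IN NUMBER) AS
BEGIN
  -- one set-based statement executed inside the database
  UPDATE emp
     SET sal = sal * (1 + p_pct / 100)
   WHERE deptno = p_deptno;
END raise_dept_salary;
/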
More Related Content
Similar to Five more things about Oracle SQL and PLSQL
Analytic SQL functions, or "window functions", have been available since 8.1.6, but they are still dramatically underused by application developers. This session looks at the syntax and usage of analytic functions, and how they can supercharge your SQL skillset.
Covers analytics from their inception in 8.1.6 all the way through to the enhancements in 18c and 19c.
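As a brief illustration (the EMP table is an assumption), analytic functions report aggregates alongside the detail rows without collapsing them:

SELECT ename,
       deptno,
       sal,
       SUM(sal) OVER (PARTITION BY deptno)                   AS dept_total, -- department total repeated on every row
       RANK()   OVER (PARTITION BY deptno ORDER BY sal DESC) AS sal_rank    -- salary ranking within the department
FROM   emp;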
Sangam 19 - Successful Applications on Autonomous (Connor McDonald)
The autonomous database offers insane levels of performance, but you won't be able to attain that if you are not constructing your SQL statements in a way that is scalable...and more importantly, secure from hacking
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
APEX tour 2019 - successful development with autonomous (Connor McDonald)
The autonomous database offers insane levels of performance, but you won't be able to attain that if you are not constructing your SQL statements in a way that is scalable...and more importantly, secure from hacking
Apologies for most pics missing and awful layout...you can thank slideshare for that :-(
Slides from the APAC Groundbreakers Tour, from the Perth and Melbourne legs. This session covered the features in 18c, 19c and 20c, along with the new free database offerings announced by Oracle at OpenWorld 2019.
Slides from OpenWorld. Flashback has been around for a long time, yet people assume it sits entirely within the realm of the DBA. But with modern development techniques such as continuous integration/continuous deployment, Flashback is actually perfect for *developers*.
Slides from the OpenWorld talk on read consistency. It is the feature that makes Oracle such a great database for performance and concurrency. But if misunderstood, it can lead to confusion for developers
Slides from OpenWorld 2019. Want to make sure your applications are slow, burn lots of CPU, and are easily broken into by hackers? Well...in reality, if you know how to do this, then you'll know how to avoid it.
Slides from OpenWorld 2019. A look at how to safely (and unsafely) kill sessions in the Oracle database, and how to perhaps avoid killing them altogether.
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career; they will also open up a myriad of other benefits as well.
Latin America Tour 2019 - 10 great SQL features (Connor McDonald)
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
Latin America Tour 2019 - slow data and SQL processing (Connor McDonald)
Well done! You've come up with the killer idea for 2020. You've got the best UI design anyone has ever seen! Your modern application ticks all the boxes: serverless, functional, Kubernetes, microservices, API-based, the list goes on. It runs on every OS and every type of device. But unfortunately, all of this counts for absolutely NOTHING if your data access is slow or buggy. An Autonomous database will fix all that, right? Only if you understand the fundamentals of how SQL is processed by the database. For novice developers, SQL can be hard to understand and is sometimes totally hidden from view under an ORM. Let's peel back the covers to show how SQL is processed, how to avoid getting hacked, and how to get data back to your application in a snappy fashion.