In any large ecosystem, some areas always stay in the twilight, outside of the public's attention. This deep dive tries to change that for two PL/SQL topics that are, at first glance, unrelated: the hierarchical profiler (HProf) and database triggers. Look closer, though, and they have something in common: both are significantly underused. HProf because hardly anyone has heard of it, database triggers because of a decades-old stigma. Let's put both of them back into our development toolset!
Part #1. One of the most valuable FREE SQL and PL/SQL performance tuning tools is almost completely unknown! How much time is spent in routine A? How often is function B called? To answer such questions, most developers would hand-code something instead of using the Oracle PL/SQL hierarchical profiler (HProf). This isn't because the provided functionality is disliked, but because developers simply aren't aware of its existence! This presentation is an attempt to alter that trend and reintroduce HProf to a wider audience.
Part #2. There isn’t anything “evil” about database triggers; they just have to be used where they can actually solve problems. In this presentation, various kinds of triggers will be examined from a global system optimization view, including tradeoffs between multiple goals (e.g., depending upon the available hardware, developers can select either CPU-intensive or I/O-intensive solutions). This presentation will focus on the most common performance problems related to different kinds of DML triggers and the proper ways of resolving them.
The document discusses calling user-defined functions within SQL statements. It notes that functions may be called multiple times depending on the structure of the SQL statement. Functions in the SELECT and WHERE clauses of a query will be called independently for each row. Functions in an ORDER BY clause may also be called twice if an inline view or view is used due to query rewrite. The number of function calls can be tracked using a package to inspect execution.
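The call-counting technique described above can be sketched with a small package holding a session-level counter. All names here (counter_pkg, f_change, the emp table) are illustrative, not from the original slides:

```sql
-- Hypothetical counter package: a package-level variable survives between
-- calls within a session, so it can record how often a function fires.
CREATE OR REPLACE PACKAGE counter_pkg IS
  v_calls NUMBER := 0;
  FUNCTION f_change (p_in NUMBER) RETURN NUMBER;
  PROCEDURE p_reset;
END counter_pkg;
/
CREATE OR REPLACE PACKAGE BODY counter_pkg IS
  FUNCTION f_change (p_in NUMBER) RETURN NUMBER IS
  BEGIN
    v_calls := v_calls + 1;   -- bump the counter on every invocation
    RETURN p_in * 2;
  END f_change;
  PROCEDURE p_reset IS BEGIN v_calls := 0; END p_reset;
END counter_pkg;
/
-- Referencing the function in SELECT and WHERE makes it fire for each clause:
SELECT counter_pkg.f_change(empno)
FROM   emp
WHERE  counter_pkg.f_change(empno) > 0;

-- Inspect the counter afterwards:
BEGIN
  DBMS_OUTPUT.PUT_LINE('calls: ' || counter_pkg.v_calls);
END;
/
```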
Data Tracking: On the Hunt for Information about Your Database - Michael Rosenblum
Behind the scenes, Oracle databases hide a myriad of processes to ensure that your data can be safely stored and retrieved. These processes also leave “tracks” (or they COULD leave tracks if you set them up properly). These tracks, together with application-specific data, create a complete representation of the system’s day-to-day activity. Too often this representation is lost at the DBA/Developer borderline, mostly because one side is not aware of the needs of the other. This presentation strives to bridge this gap. It focuses on key sources of database information and techniques that are useful for both DBAs and developers:
- Data Dictionary
- Oracle Logging
- Oracle Tracing
- Advanced code instrumentation
The document discusses views in Oracle databases and how they have evolved beyond simple stored SQL queries. Views can now serve as an isolation layer between applications and tables, accept DML operations directly or through triggers, and include complex functionality through features like parameterized conditions, dynamic SQL, and INSTEAD OF triggers. The document outlines techniques for optimizing DML operations on views, such as using dynamic SQL to only update changed columns, and leveraging compound triggers for shared program logic. It also warns of performance issues that can arise from logical primary keys on views.
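An INSTEAD OF trigger of the kind mentioned above can route DML against a join view to the correct base table. This is a minimal sketch with illustrative table and column names:

```sql
-- Hypothetical join view over classic emp/dept tables.
CREATE OR REPLACE VIEW v_emp_dept AS
  SELECT e.empno, e.ename, d.dname
  FROM   emp e JOIN dept d ON d.deptno = e.deptno;

-- INSTEAD OF trigger: intercepts the UPDATE on the view and applies
-- only the changed column to the underlying table.
CREATE OR REPLACE TRIGGER v_emp_dept_iu
INSTEAD OF UPDATE ON v_emp_dept
FOR EACH ROW
BEGIN
  IF UPDATING('ENAME') THEN
    UPDATE emp SET ename = :NEW.ename WHERE empno = :OLD.empno;
  END IF;
END;
/
```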
Managing Unstructured Data: LOBs in the World of JSON - Michael Rosenblum
This document discusses managing unstructured JSON data in Oracle databases. It describes how a company initially stored JSON files in VARCHAR2 columns, but then the files grew larger than 4000 characters requiring a change to CLOB storage. This change caused issues until developers understood that CLOBs have different access, storage, and processing mechanisms compared to VARCHAR2. The document provides an overview of CLOB architecture including data access, internal storage, caching, logging, and indexing. It emphasizes that properly understanding CLOBs is important when storing and manipulating JSON data in Oracle databases.
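The VARCHAR2-to-CLOB migration described above typically ends up with a table like the following sketch (names and storage options are illustrative; the IS JSON check and JSON_VALUE require Oracle 12c or later):

```sql
-- Hypothetical table for JSON documents that outgrew VARCHAR2(4000).
CREATE TABLE json_docs (
  doc_id  NUMBER PRIMARY KEY,
  payload CLOB CONSTRAINT json_docs_chk CHECK (payload IS JSON)
)
LOB (payload) STORE AS SECUREFILE (CACHE);  -- cache LOB blocks in the buffer cache

-- SQL/JSON functions then reach inside the document:
SELECT JSON_VALUE(payload, '$.customer.name')
FROM   json_docs
WHERE  doc_id = 1;
```

Note the explicit CACHE clause: by default LOB reads bypass the buffer cache, which is one of the access-mechanism differences from VARCHAR2 that the abstract alludes to.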
This presentation is an attempt to switch sides and show code management from the developer's point of view. It stays outside of the various VCS solutions and focuses on hands-on approaches: activity control via system triggers, conditional compilation, synonym manipulation, and utilization of Edition-Based Redefinition (EBR).
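Conditional compilation, one of the listed approaches, can be sketched as follows (the flag name debug_on and procedure are illustrative):

```sql
-- Set a compile-time flag for the session...
ALTER SESSION SET PLSQL_CCFLAGS = 'debug_on:TRUE';

-- ...then compile debug code in or out depending on the flag.
CREATE OR REPLACE PROCEDURE do_work IS
BEGIN
  $IF $$debug_on $THEN
    DBMS_OUTPUT.PUT_LINE('do_work started');  -- exists only in debug builds
  $END
  NULL;  -- real work goes here
END do_work;
/
```

Recompiling with `PLSQL_CCFLAGS = 'debug_on:FALSE'` removes the debug branch from the compiled unit entirely, with no runtime cost.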
DBA Commands and Concepts That Every Developer Should Know - Alex Zaballa
DBA Commands and Concepts That Every Developer Should Know was presented by Alex Zaballa, an Oracle DBA with experience in Brazil and Angola. The presentation covered Oracle Flashback Query, Flashback Table, RMAN table recovery, pending statistics, explain plan, DBMS_APPLICATION_INFO, row-by-row vs bulk processing, Virtual Private Database, extended data types, SQL text expansion, identity columns, UTL_CALL_STACK, READ privileges vs SELECT privileges, and online table redefinition. The presentation included demonstrations of many of these concepts.
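Flashback Query and Flashback Table, the first two items on that list, look like this in practice (table name illustrative; both depend on sufficient undo retention):

```sql
-- Flashback Query: read the table as it was 10 minutes ago, no restore needed.
SELECT *
FROM   employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE  employee_id = 100;

-- Flashback Table: rewind the table itself (requires row movement enabled).
ALTER TABLE employees ENABLE ROW MOVEMENT;
FLASHBACK TABLE employees TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);
```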
The document provides guidance on optimizing PL/SQL code performance. It discusses avoiding unnecessary row-by-row processing, nested row-by-row processing, and excessive access to the DUAL table. Instead, it recommends performing set-based operations using SQL and caching frequently accessed values in memory to reduce database hits. The document also covers reducing excessive function calls and unnecessary parsing through techniques like result caching and inline views.
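The row-by-row versus set-based contrast can be made concrete with a small sketch (table and column names illustrative):

```sql
-- Row-by-row: one UPDATE, one SQL-to-PL/SQL context switch, per row.
BEGIN
  FOR r IN (SELECT empno FROM emp WHERE deptno = 10) LOOP
    UPDATE emp SET sal = sal * 1.1 WHERE empno = r.empno;
  END LOOP;
END;
/

-- Set-based equivalent: the same work in a single SQL statement.
UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;
```

The single statement lets the database optimize the whole operation at once, which is exactly the "think in sets" advice the document gives.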
The Hidden Face of Cost-Based Optimizer: PL/SQL Specific Statistics - Michael Rosenblum
Database statistics are not limited to tables, columns, and indexes. PL/SQL functions also have a number of associated statistics, namely costs (CPU, I/O, network), selectivity, and cardinality (for functions that return collections). These statistics have default values that only somewhat represent reality. However, these values are always used by Oracle's cost-based optimizer to build execution plans. This session uses real-life examples to illustrate how properly managed PL/SQL statistics can significantly improve execution plans. It also demonstrates that Oracle's extensible optimizer is flexible enough to support packaged functions.
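Supplying those function-level statistics manually is a one-liner. The numbers below are purely illustrative, and f_expensive is a hypothetical function:

```sql
-- Tell the CBO what the function really costs:
-- selectivity is a percentage (0-100); cost is (CPU, I/O, network).
ASSOCIATE STATISTICS WITH FUNCTIONS f_expensive
  DEFAULT SELECTIVITY 1
  DEFAULT COST (10000 /* cpu */, 5 /* io */, 0 /* network */);
```

With realistic numbers, the optimizer can decide, for example, to evaluate a cheap indexed predicate before a costly function call instead of the default ordering.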
This document discusses techniques for detecting and preventing SQL injection using the Percona Toolkit and Noinject!. It begins by introducing SQL injection and how attackers can modify SQL queries without changing server code. It then discusses using query fingerprints to detect new queries that may indicate injection attempts. The Percona Toolkit tools pt-query-digest and pt-fingerprint are used to generate and store fingerprints in a whitelist. pt-query-digest can detect new fingerprints that have not been reviewed. The Noinject! proxy script uses fingerprints to inspect queries in real-time and block any that do not match whitelisted patterns. The document concludes by discussing limitations and ways to improve the fingerprinting approach.
Oracle Database 12.1.0.2 introduced several new features including approximate count distinct, full database caching, pluggable database (PDB) improvements like cloning and state management, JSON support, data redaction, SQL query row limits and offsets, invisible columns, SQL text expansion, calling PL/SQL from SQL, session level sequences, and extended data types support.
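Two of those 12.1.0.2 features can be shown in a couple of lines (table and column names are illustrative):

```sql
-- Approximate distinct counting: trades a small accuracy loss for a much
-- cheaper execution than exact COUNT(DISTINCT ...).
SELECT APPROX_COUNT_DISTINCT(customer_id) AS approx_customers
FROM   orders;

-- Row limits and offsets: native paging without ROWNUM tricks.
SELECT order_id, order_date
FROM   orders
ORDER  BY order_date DESC
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;
```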
Hello everyone! I hope you are all doing well at work and in your busy lives.
Today I am listing some interesting ORA- errors that I ran into recently as a beginner. Luckily, I managed to solve them too, so each error below comes with its solution.
These are errors you may already have faced, or may face someday, when working with Oracle. So be fearless and have a look! If you need any help, please let me know.
Thank you.
This document provides an overview of database administration tasks in Oracle including creating databases and tablespaces, managing users, granting and revoking privileges, managing passwords, and using roles. The key points covered are:
- How to create databases and tablespaces with the CREATE DATABASE and CREATE TABLESPACE statements.
- How to create users with the CREATE USER statement and initialize passwords.
- The types of privileges (system and object) and how to grant privileges to users using the GRANT statement.
- How to change user passwords using the ALTER USER statement.
- How to group related privileges as roles and grant roles to users.
- How to revoke privileges from users using the REVOKE statement.
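The steps above can be condensed into one script. All names and passwords are placeholders:

```sql
-- Create storage for the application.
CREATE TABLESPACE app_data DATAFILE 'app_data01.dbf' SIZE 100M;

-- Create a user with an initial password and a quota on that tablespace.
CREATE USER app_user IDENTIFIED BY "ChangeMe_1"
  DEFAULT TABLESPACE app_data
  QUOTA UNLIMITED ON app_data;

-- Group related system privileges into a role, then grant the role.
CREATE ROLE app_dev;
GRANT CREATE SESSION, CREATE TABLE TO app_dev;
GRANT app_dev TO app_user;

-- Change the password later, and revoke a privilege from the role.
ALTER USER app_user IDENTIFIED BY "ChangeMe_2";
REVOKE CREATE TABLE FROM app_dev;
```

Granting to a role rather than to each user individually is what makes the revoke at the end a single statement.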
Flex Cluster e Flex ASM - GUOB Tech Day - OTN TOUR LA Brazil 2014 - Alex Zaballa
The document discusses Oracle Flex Cluster and Flex ASM configurations. A Flex Cluster allows running Oracle databases on hub and leaf nodes, where leaf nodes do not require direct access to storage. It also discusses converting existing clusters to Flex Clusters and Flex ASM. Key aspects covered include the use of Grid Naming Service for Flex Clusters, capabilities of hub and leaf nodes, and enhancements in Flex ASM such as larger LUN size support and password file storage in ASM.
Oracle Data Redaction - GUOB - OTN TOUR LA - 2015 - Alex Zaballa
The document summarizes a presentation on Oracle Data Redaction given by Alex Zaballa. It discusses how data redaction in Oracle Database 12c and 11.2.0.4 enables protection of data shown to users in real time without application changes. Redaction policies can be created to redact specific columns for selected users or roles. The document provides examples of redaction methods and considerations for using data redaction with operations like Data Pump and CREATE TABLE AS SELECT.
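A redaction policy of the kind described can be sketched with DBMS_REDACT.ADD_POLICY. Schema, table, and policy names are illustrative; FULL redaction is used here because it needs no format parameters (numeric columns redact to 0):

```sql
-- Hide the SALARY column from every session, in real time,
-- without changing the application.
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    column_name   => 'SALARY',
    policy_name   => 'redact_salary',
    function_type => DBMS_REDACT.FULL,
    expression    => '1=1');  -- policy applies whenever this evaluates true
END;
/
```

The `expression` parameter is where role- or user-based exemptions go, for example a SYS_CONTEXT check against a privileged role.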
Database & Technology 1 _ Tom Kyte _ Efficient PL SQL - Why and How to Use.pdf - InSync2011
Thomas Kyte discusses effective techniques for writing PL/SQL code. Some key points:
1) Use PL/SQL for data manipulation as it is tightly coupled with SQL and most efficient.
2) Write as little code as possible by leveraging SQL and thinking in sets rather than loops.
3) Use static SQL where possible for compile-time checking and dependency tracking. Dynamic SQL should only be used when static SQL is impractical.
4) Leverage packages to reduce dependencies, increase modularity, and support overloading and encapsulation.
5) Employ bulk processing techniques like bulk collects to minimize round trips to the database.
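Tip 5 can be sketched with the classic BULK COLLECT / LIMIT pattern (table name illustrative):

```sql
-- Fetch 100 rows per round trip instead of one row per fetch.
DECLARE
  CURSOR c IS SELECT ename FROM emp;
  TYPE t_names IS TABLE OF emp.ename%TYPE;
  l_names t_names;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_names LIMIT 100;
    FOR i IN 1 .. l_names.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(l_names(i));  -- process each buffered row
    END LOOP;
    EXIT WHEN c%NOTFOUND;  -- last (possibly partial) batch already processed
  END LOOP;
  CLOSE c;
END;
/
```

The LIMIT clause caps PGA memory per batch, which is why this pattern is preferred over an unbounded BULK COLLECT on large result sets.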
These are the slides from the DEF CON talk titled 'The making of the 2nd SQL injection worm'. Refer to the video presentations uploaded to www.notsosecure.com.
Database & Technology 1 _ Tom Kyte _ SQL Techniques.pdf - InSync2011
- The document discusses various SQL techniques, including using rownum, scalar subqueries, analytics, hints, materialized views, and merge statements.
- It emphasizes that the schema matters greatly for query performance and provides examples showing performance differences between index organized tables and heap tables.
- It also notes that you need to continually learn new things as SQL and Oracle evolve over time, providing examples of newer SQL features and hints.
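Of the techniques listed, MERGE is the one that most often replaces hand-written IF-exists-UPDATE-else-INSERT logic. A minimal sketch with illustrative table names:

```sql
-- Upsert: update rows that already exist in the target, insert the rest,
-- all in one statement and one pass over the source.
MERGE INTO emp_copy t
USING emp s
ON (t.empno = s.empno)
WHEN MATCHED THEN
  UPDATE SET t.ename = s.ename, t.sal = s.sal
WHEN NOT MATCHED THEN
  INSERT (empno, ename, sal)
  VALUES (s.empno, s.ename, s.sal);
```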
This document provides an overview of the Oracle Enterprise Manager Command Line Interface (EM CLI).
It discusses the different modes of EM CLI including standard, interactive, and scripting modes. It also covers EM CLI verbs, formatting output, fetching information from the EM repository, and provides examples of Bash and Python scripts using EM CLI.
Sample scripts demonstrated include clearing stateless alerts, changing database passwords, and promoting unmanaged databases to managed targets. Fundamentals of Python programming are also introduced for effective EM CLI scripting.
- Session 17 is blocking session 26 from updating a row in the test_row_lock table as it currently holds a lock on that row.
- Session 1 inserted a row into the test_unique_insert_row_lock table and is blocking session 36 from inserting a duplicate value into the same table until session 1 commits.
- Lock trees in Oracle represent multiple sessions waiting to acquire the same row lock, with sessions lower in the tree waiting on those above it.
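The first scenario above is easy to reproduce. The table name comes from the text; its columns are assumed for illustration:

```sql
-- Session A: acquires a TX lock on the row and does not commit.
UPDATE test_row_lock SET val = 'x' WHERE id = 1;

-- Session B: hangs on "enq: TX - row lock contention" until A commits
-- or rolls back.
UPDATE test_row_lock SET val = 'y' WHERE id = 1;

-- A third session can see the lock tree: who waits on whom.
SELECT sid, blocking_session
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```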
The document discusses stored procedures and triggers in databases. Stored procedures are reusable SQL code stored in the database that can increase performance, while triggers automatically run SQL code in response to changes made to a database table, such as inserts, updates, or deletes. Both stored procedures and triggers can help with tasks like validation, auditing, and performance, by reducing traffic between applications and databases.
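An auditing trigger of the kind described can be sketched in a few lines (all table and column names are illustrative):

```sql
-- Record every change to EMP in an audit table, automatically,
-- with no extra round trip from the application.
CREATE OR REPLACE TRIGGER emp_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  INSERT INTO emp_audit (empno, action, changed_by, changed_at)
  VALUES (COALESCE(:NEW.empno, :OLD.empno),
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D'
          END,
          USER,
          SYSTIMESTAMP);
END;
/
```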
The document discusses the EXPLAIN statement in MySQL. It provides examples of using the traditional EXPLAIN output and the new JSON format for EXPLAIN. The JSON format provides more detailed information about the query plan and execution in a structured format. It allows seeing things like how conditions are split and when subqueries are evaluated.
This document discusses dependency injection in the Spring Framework. It covers setter injection, constructor injection, and method injection using both XML and annotation-based configurations. Setter injection supplies dependencies through setter methods; constructor injection supplies them through a class's constructor; method injection replaces or augments existing methods at runtime. Setter and constructor injection can be configured with XML's <property> and <constructor-arg> tags or with annotations like @Autowired on setter methods or constructors, while method injection uses the <replaced-method> or <lookup-method> tags in XML.
By using specially crafted parameters in double quotes, it is possible to bypass the input validation of the Oracle dbms_assert package and inject SQL code. This allows dozens of already patched Oracle vulnerabilities to be exploited again across versions 8.1.7.4 to 10.2.0.2. The researcher notified Oracle of the problem in April 2006. To mitigate risks, privileges like CREATE PROCEDURE should be revoked to prevent injection of malicious functions or procedures.
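Beyond revoking privileges, the robust defense against this class of bypass is to keep attacker-controlled values out of the SQL text entirely. A hedged sketch (procedure and table names illustrative):

```sql
-- Bind variables: p_name is passed as data, never concatenated into
-- the statement, so there is nothing for dbms_assert to validate
-- and nothing for an attacker to inject into.
CREATE OR REPLACE PROCEDURE get_salary (p_name IN VARCHAR2) IS
  l_sal NUMBER;
BEGIN
  EXECUTE IMMEDIATE 'SELECT sal FROM emp WHERE ename = :1'
    INTO l_sal
    USING p_name;
  DBMS_OUTPUT.PUT_LINE('salary: ' || l_sal);
END get_salary;
/
```

DBMS_ASSERT remains useful for the cases binds cannot cover, such as identifiers (table or column names) supplied at runtime.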
Adrian Hardy's slides from PHPNW08
Once you have your query returning the correct results, speed becomes an important factor. Speed can either be an issue from the outset, or can creep in as your dataset grows. Understanding the EXPLAIN command is essential to helping you solve and even anticipate slow queries.
Associated video: http://blip.tv/file/1791781
Introduction to MySQL Query Tuning for Dev[Op]s - Sveta Smirnova
To get data, we query the database. MySQL does its best to return requested bytes as fast as possible. However, it needs human help to identify what is important and should be accessed in the first place.
Queries written smartly can significantly outperform automatically generated ones, and indexes and optimizer statistics (not just histograms) can greatly increase query speed.
In this session, I will demonstrate by example how MySQL query performance can be improved. I will focus on techniques accessible to developers and DevOps engineers, rather than those usually used by database administrators. At the end, I will present troubleshooting tools that help you identify why your queries do not perform, so you can apply the knowledge from the beginning of the session to improve them.
IEEE Day 2013 Oracle Database 12c: new features for developers - Ramin Orujov
Ramin Orujov gave a presentation on new features in Oracle Database 12c for developers. He discussed new SQL features such as using sequences as default column values, identity columns, extended 32K limits for the VARCHAR2 and NVARCHAR2 data types, and row limiting with OFFSET/FETCH for paging. New PL/SQL features included returning result sets from procedures, modularity with the ACCESSIBLE BY clause, and result caching for invoker rights functions. For Java, the presentation covered PL/SQL boolean support in JDBC and package-level collection support.
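Three of those SQL features fit in a single illustrative DDL statement (table and column names assumed; the 32K column requires MAX_STRING_SIZE=EXTENDED):

```sql
-- Sequence as a column default (new in 12c).
CREATE SEQUENCE order_seq;

CREATE TABLE orders (
  order_id NUMBER DEFAULT order_seq.NEXTVAL,     -- sequence default
  line_id  NUMBER GENERATED ALWAYS AS IDENTITY,  -- identity column
  note     VARCHAR2(32000)                       -- extended data type
);
```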
This document provides an overview and programming tips for using SQL procedural language (SQL PL) stored procedures on DB2 for z/OS. It discusses various features and enhancements for SQL PL including compound blocks, templates, dynamic SQL, XML support, array data types, global variables, and autonomous transactions. The document also provides examples and best practices for writing SQL procedures, including handling naming resolution, using templates for readability, and working with arrays and dynamic SQL.
Bringing Oracle databases to the cloud involves major tectonic shifts: (1) hardware resources are no longer static and (2) expense model is “pay-per-use”. Previously, as long as your current servers were surviving the workload, no one cared whether they were under-utilized. Now, this difference can be immediately monetized because the resource elasticity means that you can give it back. As a result, the total quality of the code base (+performance tuning) has a direct impact on cost. This presentation will share some of the corresponding best practices: code instrumentation, profiling, code management, resource optimization etc. Overall, you can make your system cloud-friendly, but doing so takes explicit effort and serious thinking!
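Code instrumentation, the first best practice on that list, can start with something as simple as labeling sessions so monitoring data (and therefore cloud spend) can be attributed to specific workloads. Module and action names here are illustrative:

```sql
-- Label this session's work; the labels surface in
-- V$SESSION.MODULE / V$SESSION.ACTION and in ASH/AWR data.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'nightly_billing',
    action_name => 'aggregate_invoices');

  -- ... batch work here ...

  DBMS_APPLICATION_INFO.SET_MODULE(NULL, NULL);  -- clear when done
END;
/
```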
Watch Re-runs on your SQL Server with RML Utilities - dpcobb
RML Utilities provide command line tools and interactive reports enabling you to:
- Take SQL trace files (captured with SQL Profiler, sp_trace, or extended events in SQL 2012+),
- Process them into replayable RML files (using readtrace.exe),
- Play them back in a different SQL environment (using ostress.exe),
- And compare the performance at a granular level (using reporter.exe or custom queries).
This document outlines an agenda for a talk on 30 LotusScript tips. It begins with an introduction that explains the structure of the talk and why it will cover 30 tips. The agenda is then broken into sections on fundamental tips ("Nursery Slopes"), less well known everyday tips, and more advanced tips to provoke thought. Each section provides 10 tips on topics like using Option Declare, templates and versions, error handling, classes, and performance coding. The document aims to share a variety of LotusScript best practices and techniques.
BP206 - Let's Give Your LotusScript a Tune-Up Craig Schumann
The document provides tips for improving LotusScript performance by identifying and addressing bottlenecks. It discusses identifying where performance issues occur, tools for measuring performance such as using the Now function and GetThreadInfo, and avoiding common pitfalls like overuse of GetNthDocument and unnecessary code. The document emphasizes the importance of measuring performance empirically before optimizing code.
The document discusses execution plans in Oracle databases. It provides information on how to view predicted and actual execution plans, including using EXPLAIN PLAN, AUTOTRACE, and querying dynamic views. It also describes how to capture execution plans and bind variables from trace files using tools like TKPROF.
This document discusses Oracle wait events. It explains that wait events track where Oracle is spending its time, including different types of waits like CPU time, I/O events, enqueue events and latch events. It provides examples of specific wait events like db file sequential read, direct path write, log file sync and buffer busy waits. It also gives recommendations for interpreting wait event data and resolving high wait times through methods like tuning SQL, improving I/O speeds, and reducing contention.
This document outlines an agenda for a talk titled "AD505 DevBlast – 30 LotusScript Tips". The talk aims to provide 30 LotusScript tips organized into three sections: Nursery Slopes (10 fundamental tips), Everyday Tips (10 less well known tips), and Advanced Tips (10 tips to provoke thought). The introduction provides background on the speaker and explains the goal of sharing 30 tips using the "Lazy Programmer" methodology of breaking problems into manageable pieces.
This document summarizes a presentation on database optimization techniques for DBAs. It discusses using reports like AWR, ASH, and ADDM to analyze performance issues. It also covers using explain plans and trace files to diagnose problems. Specific troubleshooting steps are provided for examples involving parallel processing issues, performance degradation after an upgrade, and temporary space usage. The presentation emphasizes using data from tools like these to identify and address real performance problems, rather than superficial "tinsel" optimizations.
Catalyst - refactor large apps with it and have fun!mold
This document discusses refactoring a large Perl application using Catalyst. Some key points:
1) The existing application was built over time by many people and contained inconsistencies, bugs and hacks. Refactoring with Catalyst aimed to make the code more maintainable, easier to work with, and fun to develop.
2) Catalyst provides an MVC framework and conventions that help split code into logical modules and provide common web functionality out of the box.
3) There was an initial steep learning curve to understand Catalyst and choose supporting libraries, but Template Toolkit, DBIx::Class and other CPAN modules helped simplify tasks like templates, object-relational mapping and handling web requests
1) The document discusses Oracle database auditing features before and after version 12.1. It describes migrating the audit trail to the unified audit trail and using the SYS.UNIFIED_AUDIT_TRAIL table.
2) It provides steps to configure syslog auditing on Linux for Oracle database audit records. Procedures are created to output messages to syslog and call it from a fine-grained auditing policy handler.
3) An example fine-grained auditing policy is created to audit access to the SECDEMO.CUSTOMER table and call the syslog handler for non-application users.
This document provides an overview of SQL tuning concepts and tools in Oracle Database. It discusses the differences between database tuning and SQL tuning. It also covers diagnostic tools like SQL Trace, ASH, EXPLAIN PLAN, AUTOTRACE, and SQL Developer. Active monitoring tools like AWR, SQL Monitor and reactive tools like SQL Diagnostic Tool and SQLD360 are also mentioned. Additional topics include full table scans, adaptive features, statistics, hints, pending statistics, restoring statistics history, and invisible indexes.
PHP classes in mumbai, Introduction to PHP/MYSQL..
best PHP/MYSQL classes in mumbai with job assistance.
our features are:
expert guidance by IT industry professionals
lowest fees of 5000
practical exposure to handle projects
well equiped lab
after course resume writing guidance
For more Visit: http://vibranttechnologies.co.in/php-classes-in-mumbai.html or http://phptraining.vibranttechnologies.co.in
You need to write a script you can call from cron to upload a directory of files to S3. Or perhaps zip log files and E-mail them? Or import a CSV into the DB. What do you use? Bash? Python? Node? No silly, you use CFML! ColdFusion developers have been able to write pure CLI scripts with CommandBox CLI for years now and it beats the pants of bash or Node. There's tools for creating interactive wizards, progress bar animations, colored console text output, and easy parameter handling. And the best thing is, CommandBox Task Runners are written in CFML so they can do anything CFML can do. Come learn how quick and easy Task Runners are to use so CFML can become the go-to language to use for anything.
This document provides DDL scripts to create tables and constraints for the HR schema in an Oracle database. It includes scripts to create tables, primary keys, unique constraints, foreign keys, and check constraints. It also describes installing the Oracle database software and creating a database called ORCL using the Database Configuration Assistant.
This document provides 20 tips and techniques for SAS programming. It begins with tips for creating pivot tables from SAS data in Excel and using Visual Basic scripts. It then provides tips on debugging complex macros, adding operators to the macro language, and using progress bars in SAS. The document continues sharing many other tips for optimizing SAS code, such as using views to improve efficiency, finding secret SAS options, using pipes to process large files, and formatting tables to visually indicate high and low values.
Oracle Database 12c - The Best Oracle Database 12c Tuning Features for Develo...Alex Zaballa
Oracle Database 12c includes many new tuning features for developers and DBAs. Some key features include:
- Multitenant architecture allows multiple pluggable databases to consolidate workloads on a single database instance for improved utilization and administration.
- In-memory column store enables real-time analytics on frequently accessed data held entirely in memory for faster performance.
- New SQL syntax like FETCH FIRST for row limiting and offsetting provides more readable and intuitive replacements for previous techniques.
- Adaptive query optimization allows queries to utilize different execution plans like switching between nested loops and hash joins based on runtime statistics for improved performance.
The document discusses how REST APIs and ORDS can help DBAs adopt more agile practices. It provides examples of how DBAs can expose database operations and metadata via REST endpoints to improve communication and automation between developers and DBAs. This includes endpoints for checking database connectivity, putting applications in maintenance mode, retrieving backup status, creating/deleting restore points, refreshing schemas, and more. The document argues that REST and ORDS can help make DBAs more agile by standardizing their operations and facilitating integration with other tools and services.
Presenter: Dean Richards of Confio Software
If you're a developer or DBA, this presentation will outline a method for determining the best execution plan for a query every time by utilizing SQL Diagramming techniques.
Whether you're a beginner or expert, this approach will save you countless hours tuning a query.
You Will Learn:
* SQL Tuning Methodology
* Response Time Tuning Practices
* How to use SQL Diagramming techniques to tune SQL statements
* How to read executions plans
OOW16 - Oracle Database 12c - The Best Oracle Database 12c New Features for D...Alex Zaballa
Oracle Database 12c introduces many new features for developers and DBAs. These include native support for JSON, data redaction capabilities, improved SQL query functionality using row limits and offsets, and new PL/SQL features like calling functions from SQL. The presentation provides demonstrations of these new features.
Hidden Gems of Performance Tuning: Hierarchical Profiler and DML Trigger Optimization
1. 1 of 92
Hidden Gems of Performance Tuning:
Hierarchical Profiler
and
DML Trigger Optimization
Michael Rosenblum
www.dulcian.com
2. 2 of 92
Who Am I? – “Misha”
Oracle ACE
Co-author of 3 books
PL/SQL for Dummies
Expert PL/SQL Practices
Oracle PL/SQL Performance Tuning Tips & Techniques
Known for:
SQL and PL/SQL tuning
Complex functionality
Code generators
Repository-based development
3. 3 of 92
Yet another performance
presentation???
NO!
Because:
I will NOT talk about bind variables
… more than a few [dozen] times
I will NOT mention extra paid options/products.
Well…I am a [database] doctor, not a [salesman?] (c) Star Trek
I will NOT be buzzword-compliant
… so you can be [mostly] CLOUD- and EXADATA-free.
5. 5 of 92
Tuning (CFO Level)
Means:
Ensuring that available resources are used in the most efficient
way:
No wasted resources
No under-utilized resources
Impact:
Makes CFO happy when they look at hardware costs
7. 7 of 92
Reality Check
End-users
DON’T CARE ABOUT:
CPU utilization/disk workload/etc.
Being buzzword-compliant by using the coolest technology stack
DO CARE ABOUT:
Being able to run their business
… i.e. monthly report should not take two months to prepare!
Time wasted looking at an hourglass on the screen
… although the notion of “wasted time” can be managed by using various
psychological tricks (managing expectations!).
8. 8 of 92
So?
It’s all about end-user requests…
3. Application
Server
2. Send data from
Client to app server
5. Database
6. Return Data from
database to app server
1. Client
4. Send data from
app server to database
7. Data in
Application Server
8. Return data from
app server to client
9. Data in
client
… and time is lost here
9. 9 of 92
Let’s assume….
You’ve proven that IT IS a database problem
... and not network traffic/slow client/etc.
… and not the number of round trips from the application server!
You can modify database-related code
Best case: You know how to use a “thick database approach”
… i.e. you have high level PL/SQL APIs (that call various SQL queries)
…and these APIs are called by everybody else (UI/reports/BI/etc.)
Worst case: If needed, you can add diagnostic PL/SQL calls around
SQL.
10. 10 of 92
A Perfect World
Database
API response
API call
PROCEDURE p_DoSomething IS
BEGIN
p_doSomethingElse1;
sql_1;
p_doSomethingElse2;
sql_2;
…
END;
11. 11 of 92
Less Than Perfect World
Database
SQL
or
PL/SQL
Application Server
void doSomething
{
…
doSomethingElse;
…
}
12. 12 of 92
THE Problem
Database is spending too much time doing something:
Perfect Case [one SQL statement that does not contain any
user-defined functions]
Many monitoring mechanisms
Many ways to adjust
Lots of coverage
Real case [combination of SQL and PL/SQL]
Hierarchical in its nature – something is calling something that is
calling something else
Cannot be represented as a sequence of simple cases!
14. 14 of 92
What can it do for you?
PL/SQL Hierarchical Profiler:
Gathers hierarchical statistics of all calls (both SQL and
PL/SQL) for the duration of the monitoring
… into a portable trace file
Has powerful aggregation utilities
… both within the database and using a command-line interface
Available since Oracle 11.1 [replaced PL/SQL Profiler]
… and constantly improved (significantly changed in 18c!)
15. 15 of 92
Introductory Case
Background:
You have multiple PL/SQL program units calling each other that
have SQL statements within them.
Problem:
You need to know where time is wasted and where it would be
best to spend time on tuning.
16. 16 of 92
Intro (1)
SQL> CREATE DIRECTORY IO AS 'C:\IO';
SQL> exec dbms_hprof.start_profiling
(location=>'IO',filename=>'HProf.txt');
SQL> DECLARE
2 PROCEDURE p_doSomething (pi_empno NUMBER) IS
3 BEGIN
4 dbms_lock.sleep(0.1);
5 END;
6 PROCEDURE p_main IS
7 BEGIN
8 dbms_lock.sleep(0.5);
9 FOR c IN (SELECT * FROM emp) LOOP
10 p_doSomething(c.empno);
11 END LOOP;
12 END;
13 BEGIN
14 p_main();
15 END;
16 /
SQL> exec dbms_hprof.stop_profiling;
Destination folder:
WRITE is enough
Spend time
17. 17 of 92
Intro (2)
Raw file (C:\IO\HProf.txt) is not very readable…
P#V PLSHPROF Internal Version 1.0
P#! PL/SQL Timer Started
P#C PLSQL."".""."__plsql_vm"
P#X 8
P#C PLSQL."".""."__anonymous_block"
P#X 6
P#C PLSQL."".""."__anonymous_block.P_MAIN"#980980e97e42f8ec #6
P#X 63
P#C PLSQL."SYS"."DBMS_LOCK"::9."__pkg_init"
P#X 7
P#R
P#X 119
P#C PLSQL."SYS"."DBMS_LOCK"::11."SLEEP"#e17d780a3c3eae3d #197
P#X 500373
P#R
P#X 586
P#C SQL."".""."__sql_fetch_line9" #9."4ay6mhcbhvbf2"
P#! SELECT * FROM SCOTT.EMP
P#X 3791
P#R
P#X 17
<<… and so on …>>
Call
Elapsed time
between events
Return
from
sub-program
18. 18 of 92
Intro (3)
… but you can make it readable via the command-line utility:
C:\Utl_File\IO>plshprof -output hprof_intro HProf.txt
PLSHPROF: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0
- 64bit Production
[8 symbols processed]
[Report written to 'hprof_intro.html']
<Show files>
19. 19 of 92
Intro Findings
The results are:
All of the time is spent in DBMS_LOCK.SLEEP
…There are no descendants!
When we drill down, the SLEEP procedure was called from
multiple parent modules!
This is important because, in one case, time spent is 0.1 seconds per call and in
the other is 0.5 seconds per call.
Oracle 12.2+: SQL ID and first 50 characters of SQL text
Very nice, especially in the case of Dynamic SQL
Many sorting/reporting options!
20. 20 of 92
Intro (4)
… and also you can analyze the trace file via PL/SQL APIs
Pro: easier to link with SQL statistics
Contra: need extra READ privilege on the directory + need to create tables
beforehand [11g,12c – script only / 18c,19c – API or script]
DECLARE
runid NUMBER;
BEGIN
runid := DBMS_HPROF.analyze('IO','HProf.txt');
DBMS_OUTPUT.PUT_LINE('runid = ' || runid);
END;
/
DBMSHP_RUNS
Run_ID PK
…
DBMSHP_Parent_Child_Info
Run_ID PK
ParentSymID FK
ChildSymID FK
…
DBMSHP_Function_Info
SymbolID PK
Run_ID FK
Module
Type
Function
…
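Once a run is analyzed, these tables can be queried directly. A minimal sketch (column names follow the documented DBMSHP_FUNCTION_INFO layout; :runid is the value returned by ANALYZE):

```sql
-- Sketch: top subprograms of one profiler run by own elapsed time
SELECT owner, module, function, calls,
       function_elapsed_time   -- microseconds spent in the unit itself
  FROM dbmshp_function_info
 WHERE runid = :runid
 ORDER BY function_elapsed_time DESC
 FETCH FIRST 10 ROWS ONLY;    -- 12c+ row-limiting syntax
```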
21. 21 of 92
Intro (5)
… btw, ANALYZE has some nice options:
Trace only specific entries
runid := DBMS_HPROF.analyze('IO','HProf.txt',
trace=> '"SCOTT"."F_CHANGE_TX"');
Trace up to N occurrences
runid := DBMS_HPROF.analyze('IO','HProf.txt',
collect => 20,
trace=> '"SCOTT"."F_CHANGE_TX"');
Trace starting from N-th occurrence
runid := DBMS_HPROF.analyze('IO','HProf.txt',
skip =>1,
trace=> '"SCOTT"."F_CHANGE_TX"');
22. 22 of 92
18c and Up – no RAW Files (1)
Raw data can be generated into tables directly:
DBMS_HPROF.Create_Tables – new API to create all tables
script dbmshptab.sql still exists, but considered deprecated
API has a FORCE_IT parameter – if TRUE, tables are dropped/recreated
Pro:
Only limited privileges are needed to run performance analysis
directly in PROD (no file access at all!)
Contra:
Harder to move off-site for a third-party review
Data volume could get out of hand
23. 23 of 92
18c and Up – no RAW Files (2)
DECLARE
trace_id number;
runid number;
PROCEDURE p_doSomething (pi_empno NUMBER) IS
BEGIN
dbms_lock.sleep(0.1);
END;
PROCEDURE p_main IS
BEGIN
dbms_lock.sleep(0.5);
FOR c IN (SELECT * FROM scott.emp) LOOP
p_doSomething(c.empno);
END LOOP;
END;
BEGIN
trace_id := dbms_hprof.start_profiling ;
p_main();
dbms_hprof.stop_profiling;
runid:=dbms_hprof.analyze(trace_id);
dbms_output.put_line('RunId:'||runid);
END;
Runtime ID
No files!
Persistent ID
24. 24 of 92
True Story #1:
Typical Hierarchical Profiler Use
25. 25 of 92
Typical Situation
Help-desk client’s performance complaints:
Developer checked 10046 trace and couldn’t find anything
suspicious
I noticed that the core query contains a user-defined PL/SQL
function.
Action:
Wrap suspicious call in HProf start/stop in TEST instance (with
the same volume of data)
26. 26 of 92
Suspect
SQL> exec dbms_hprof.start_profiling ('IO', 'HProf_Case1.txt');
SQL> declare
2 v_tx varchar2(32767);
3 begin
4 select listagg(owner_tx,',') within group (order by 1)
5 into v_tx
6 from (
7 select distinct scott.f_change_tx(owner) owner_tx
8 from scott.test_tab
9 );
10 end;
11 /
SQL> exec dbms_hprof.stop_profiling;
1. Only 26 owners!
2. Function is doing
basic formatting
28. 28 of 92
Findings
Problem:
Time is wasted on a very cheap function which is fired lots and lots
of times
… because the original developer “guessed” at the query behavior
… i.e. he knew the function was doing basic formatting, so the output
would also be distinct
… but forgot to tell that to the CBO – GIGO!
Solution:
Rewrite query in a way that helps the CBO
… and remind all developers:
The number of function calls in SQL will surprise you if you don’t measure
them.
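To double-check how often a function is really fired from SQL, a hypothetical counter package can be used (a sketch; names are illustrative) – the function increments it on every call:

```sql
-- Hypothetical helper to count how often a function is really called
CREATE OR REPLACE PACKAGE counter_pkg IS
  v_count_nr NUMBER := 0;
  PROCEDURE p_increment;
END counter_pkg;
/
CREATE OR REPLACE PACKAGE BODY counter_pkg IS
  PROCEDURE p_increment IS
  BEGIN
    v_count_nr := v_count_nr + 1;
  END;
END counter_pkg;
/
-- Call counter_pkg.p_increment inside f_change_tx, run the query once,
-- then read counter_pkg.v_count_nr to see the true number of calls.
```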
29. 29 of 92
Fix
SQL> exec dbms_hprof.start_profiling ('IO', 'HProf_Case1_fix.txt');
SQL> declare
2 v_tx varchar2(32767);
3 begin
4 select listagg(owner_tx,',') within group (order by 1)
5 into v_tx
6 from (
7 select scott.f_change_tx(owner) owner_tx
8 from (select distinct owner
9 from scott.test_tab)
10 );
11 end;
12 /
SQL> exec dbms_hprof.stop_profiling
Filter first!
<Show files>
32. 32 of 92
Running directly from Java?
Good news:
It works!
You can run multiple statements between START and STOP
Bad news:
No SQL IDs if they run directly (at least we couldn’t get them) –
confused statistics
Environment: JDeveloper 11g
33. 33 of 92
Java Sample
String sql =
"begin dbms_hprof.start_profiling (location=>'IO',filename=>'Case1a.txt'); end;";
CallableStatement stmt = conn.prepareCall(sql);
stmt.execute();
PreparedStatement stmt2 =
conn.prepareStatement("select listagg(owner_tx,',') within group (order by 1) result \n" +
"from (select distinct scott.f_change_tx(owner) owner_tx \n" +
" from scott.test_tab) A ");
stmt2.execute();
stmt2 = conn.prepareStatement("select listagg(owner_tx,',') within group (order by 1) \n" +
"from (select distinct scott.f_change_tx(owner) owner_tx \n" +
" from scott.test_tab) B ");
stmt2.execute();
sql = "begin dbms_hprof.stop_profiling; end;";
stmt = conn.prepareCall(sql);
stmt.execute();
Difference!
<Show files>
35. 35 of 92
Running directly from SQL*Plus?
Bad news: the same problem with multiple statements:
SQL> exec dbms_hprof.start_profiling
2 (location=>'IO',filename=>'Case1b_SQLPlus.txt');
SQL> select listagg(owner_tx,',') within group (order by 1) result
2 from (select distinct scott.f_change_tx(owner) owner_tx
3 from scott.test_tab a);
...
SQL> select listagg(owner_tx,',') within group (order by 1)
2 from (select distinct scott.f_change_tx(owner) owner_tx
3 from scott.test_tab b);
...
SQL> exec dbms_hprof.stop_profiling;
36. 36 of 92
Impact – SQL*Plus
100k Calls
<Show files>
No SQL IDs
(even in 19c)
38. 38 of 92
Background
Third-party module code is slow
Functionality: Take some tables and columns /return formatted
CLOB
The code is wrapped
Original developers don’t want to accept the blame.
Action:
Gather as many statistics about the module as you can
Wrap suspicious call in HProf start/stop
40. 40 of 92
Statistics (2)
SQL> exec runstats_pkg.rs_stop;
Run1 ran in 0 cpu hsecs
Run2 ran in 3195 cpu hsecs
run 1 ran in 0% of the time
Name Run1 Run2 Diff
...
STAT...physical reads direct (lob) 13 49,991 49,978
STAT...physical reads direct temporary tablespace 13 49,991 49,978
STAT...lob writes 14 50,000 49,986
STAT...physical writes direct temporary tablespace 14 50,145 50,131
STAT...physical writes direct (lob) 14 50,145 50,131
Direct Temp I/O?!?!
41. 41 of 92
Profile for the Slow Case
Explicit
“create temp”
50k calls
42. 42 of 92
Analysis
Problem #1: Direct IO for all temporary LOB operations
Could happen only if LOB variable is initiated as NOCACHE
via DBMS_LOB.createTemporary
Problem #2: IO operation for every row in conjunction
with fetch for every row
Could happen only if DBMS_LOB.writeAppend is called within
the loop
43. 43 of 92
Unwrapped code (FYI)
FUNCTION f_getData_cl(i_column_tx VARCHAR2, i_table_tx VARCHAR2) RETURN CLOB IS
v_cl CLOB;
v_tx VARCHAR2(32767);
v_cur SYS_REFCURSOR;
BEGIN
dbms_lob.createTemporary(v_cl,false,dbms_lob.call);
OPEN v_cur FOR 'SELECT '||
dbms_assert.simple_sql_name(i_column_tx)||' field_tx'||
' FROM '||dbms_assert.simple_sql_name(i_table_tx);
LOOP
FETCH v_cur into v_tx;
EXIT WHEN v_cur%notfound;
dbms_lob.writeAppend(v_cl,length(v_tx)+1,v_tx||'|');
END LOOP;
CLOSE v_cur;
RETURN v_cl;
END;
Issue #1:
no cache
Issue #2:
no buffer
<Show optimized code if time permits>
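One possible rewrite (a sketch, not the vendor's actual fix): create the temporary LOB with caching enabled and accumulate rows in a VARCHAR2 buffer, flushing to the CLOB only when the buffer is nearly full:

```sql
FUNCTION f_getData_cl(i_column_tx VARCHAR2, i_table_tx VARCHAR2) RETURN CLOB IS
  v_cl        CLOB;
  v_tx        VARCHAR2(32767);
  v_buffer_tx VARCHAR2(32767);
  v_cur       SYS_REFCURSOR;
BEGIN
  dbms_lob.createTemporary(v_cl, TRUE, dbms_lob.call); -- cache => TRUE, no direct I/O
  OPEN v_cur FOR 'SELECT '||
    dbms_assert.simple_sql_name(i_column_tx)||' field_tx'||
    ' FROM '||dbms_assert.simple_sql_name(i_table_tx);
  LOOP
    FETCH v_cur INTO v_tx;
    EXIT WHEN v_cur%NOTFOUND;
    IF length(v_buffer_tx) + length(v_tx) + 1 > 32767 THEN
      dbms_lob.writeAppend(v_cl, length(v_buffer_tx), v_buffer_tx); -- flush buffer
      v_buffer_tx := NULL;
    END IF;
    v_buffer_tx := v_buffer_tx||v_tx||'|';
  END LOOP;
  CLOSE v_cur;
  IF v_buffer_tx IS NOT NULL THEN
    dbms_lob.writeAppend(v_cl, length(v_buffer_tx), v_buffer_tx); -- final flush
  END IF;
  RETURN v_cl;
END;
```

This addresses both issues at once: the temporary CLOB stays in the buffer cache, and writeAppend fires once per ~32K of data instead of once per row.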
44. 44 of 92
Part1: Summary
End users only care that their requests come back quickly
… and not about CPU/Memory/IO utilization
Yes, sometimes it IS the database
… but 90% of time it isn’t
PL/SQL Hierarchical profiler lets you see the system from
the end-user angle and find real performance issues
…i.e. request-driven (with drill-down option)
PL/SQL Hierarchical profiler is constantly improving
.. i.e. don’t forget to read “New Features” guide!
46. 46 of 92
Handle with care…
Triggers
Given a lot of bad publicity
…Some Oracle gurus think that they should never have existed!
Getting more complex with each Oracle release
… Cross-edition triggers, compound triggers, event triggers…
Constantly misused
… But can be very useful when applied appropriately!
48. 48 of 92
Danger Area
Developers have different reasons to use DML triggers
… but very few are valid.
Problems start when triggers are asked to perform complex
tasks
… although, there is nothing wrong with enforcing the format of
a Social Security Number field (for example).
Any solution that requires a DML trigger to touch data
outside of the current row is questionable.
49. 49 of 92
Anti-Pattern
Requirement:
A person can have a number of mailing addresses.
Only one is designated to be “current.”
Common (wrong!) solution:
Column Current_YN is added to ADDRESS table.
Complex trigger-based solutions are needed to avoid
ORA-04091 “Table is mutating”
… and yes, I know about compound triggers
50. 50 of 92
No Triggers Here!
Correct Solution:
Add reverse key to PERSON
Person
Person_ID PK
Name_TX
PrimaryAddress_FK FK
Address
Address_ID PK
Person_FK FK
City_TX
….
0..1 0..*
0..* 0..1
>> Person may have address >>
<< One of the addresses is defined as current <<
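A DDL sketch of this reverse-key design (names are illustrative):

```sql
-- Reverse key on PERSON instead of Current_YN on ADDRESS
ALTER TABLE person ADD primaryaddress_fk NUMBER;
ALTER TABLE person ADD CONSTRAINT person_primaryaddr_fk
  FOREIGN KEY (primaryaddress_fk) REFERENCES address (address_id);
-- Switching the current address is now a plain single-row update:
UPDATE person SET primaryaddress_fk = :new_address_id WHERE person_id = :id;
```

“Only one current address” is now guaranteed by the data model itself – no mutating-table workarounds required.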
52. 52 of 92
Single Condition
Same business logic/different implementation?
-- option 1:
ALTER TABLE emp add CONSTRAINT emp_sal_ck CHECK (sal<=5000);
-- option 2:
CREATE OR REPLACE TRIGGER emp_row_biu
BEFORE INSERT OR UPDATE ON emp FOR EACH ROW
BEGIN
IF :NEW.sal>5000 THEN
raise_application_error(-20001,
'salary cannot exceed 5000');
END IF;
END;
53. 53 of 92
With Constraint…
SQL> ALTER TABLE emp add CONSTRAINT emp_sal_ck CHECK (sal<=5000);
SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SELECT * FROM emp WHERE sal > 10000;
Execution Plan
----------------------------------------------------------
Plan hash value: 1341312905
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes| Cost (%CPU)| Time
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1| 38| 0 (0)|
|* 1 | FILTER | | | | |
|* 2 | TABLE ACCESS FULL| EMP | 1| 38| 3 (0)| 00:00:01
-------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(NULL IS NOT NULL)
2 - filter("SAL">10000)
Don’t do ANYTHING!
54. 54 of 92
Without Constraint…
SQL> ALTER TABLE emp DISABLE CONSTRAINT emp_sal_ck;
SQL> select * from emp where sal > 10000;
Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932
------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 38 | 3 (0)| 00:00:01
|* 1 | TABLE ACCESS FULL| EMP | 1 | 38 | 3 (0)| 00:00:01
------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("SAL">10000) Full table scan!
55. 55 of 92
Single Condition Summary
Check constraints are part of metadata – CBO impact
… as long as they are enabled and validated.
Triggers hide business logic from Oracle, but they give you
more power.
What if you need to check multiple conditions?
56. 56 of 92
Multiple Conditions - Constraints
SQL> ALTER TABLE emp
2 ADD CONSTRAINT emp_comm_ck CHECK (comm<=2000);
SQL> UPDATE emp
2 SET sal=10000,
3 comm=3000
4 WHERE empno=7902;
UPDATE emp SET sal=10000,comm=3000 WHERE empno=7902
*
ERROR at line 1:
ORA-02290: check constraint (SCOTT.EMP_COMM_CK) violated
Second condition
Only one error!
Two violations
57. 57 of 92
Multiple Conditions – Trigger
SQL> CREATE OR REPLACE TRIGGER emp_row_biu
2 BEFORE INSERT OR UPDATE ON emp
3 FOR EACH ROW
4 DECLARE
5 v_error_tx VARCHAR2(32767);
6 BEGIN
7 IF :NEW.sal>5000 THEN
8 v_error_tx:=v_error_tx||CHR(10)||
9 '* salary cannot exceed 5000';
10 END IF;
11 IF :NEW.comm>2000 THEN
12 v_error_tx:=v_error_tx||CHR(10)||
13 '* commissions cannot exceed 2000';
14 END IF;
15 IF v_error_tx IS NOT NULL THEN
16 raise_application_error(-20001,'Errors:'||v_error_tx);
17 END IF;
18 end;
19 /
Aggregator
58. 58 of 92
Multiple Conditions - Constraints
SQL> UPDATE emp
2 SET sal=10000,comm=3000
3 WHERE empno=7902;
UPDATE emp SET sal=10000,comm=3000 WHERE empno=7902
*
ERROR at line 1:
ORA-20001: Errors:
* salary cannot exceed 5000
* commissions cannot exceed 2000
ORA-06512: at "SCOTT.EMP_ROW_BIU", line 15
ORA-04088: error during execution of trigger 'SCOTT.EMP_ROW_BIU'
Both errors!
59. 59 of 92
Multiple Conditions - Summary
Triggers:
Allow multiple errors to be propagated.
Allow human-readable error messages.
Repeated warning:
Implementing cross-row conditions (“Salary Cannot exceed
150% of the average in the department”) as triggers is very
dangerous!
… just Google “read consistency”
61. 61 of 92
Populating Sequence ID
Available options:
Explicitly pass into INSERT
Before-INSERT trigger
Default=SEQUENCE.NEXTVAL (Oracle 12c+)
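The 12c+ options can be sketched as follows (illustrative object names) – neither needs a trigger:

```sql
CREATE SEQUENCE demo_seq;
-- Option A: column default driven by a sequence
CREATE TABLE demo_default (a NUMBER DEFAULT demo_seq.NEXTVAL, b VARCHAR2(1));
-- Option B: let Oracle manage the sequence entirely (identity column)
CREATE TABLE demo_identity (a NUMBER GENERATED BY DEFAULT AS IDENTITY, b VARCHAR2(1));
```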
62. 62 of 92
Sample Set
CREATE SEQUENCE test_trigger_seq;
CREATE TABLE test_trigger_tab1
(a NUMBER,
b VARCHAR2(1));
CREATE OR REPLACE TRIGGER test_trigger_tab1_bi
BEFORE INSERT ON test_trigger_tab1 FOR EACH ROW DISABLE
BEGIN
IF :NEW.a IS NULL THEN
:NEW.a:=test_trigger_seq.NEXTVAL;
END IF;
END;
Starting disabled
63. 63 of 92
Compare Pre-12c Options
SQL> exec runstats_pkg.rs_start;
SQL> ALTER TRIGGER test_trigger_tab1_bi ENABLE;
SQL> BEGIN
2 FOR i IN 1..10000 LOOP
3 INSERT INTO test_trigger_tab1(b) VALUES ('X');
4 END LOOP;
5 END;
6 /
SQL> exec runstats_pkg.rs_middle;
SQL> ALTER TRIGGER test_trigger_tab1_bi DISABLE;
SQL> BEGIN
2 FOR i IN 1..10000 LOOP
3 INSERT INTO test_trigger_tab1(a,b) VALUES(test_trigger_seq.NEXTVAL,'X');
4 END LOOP;
5 END;
6 /
SQL> exec runstats_pkg.rs_stop;
Run1 ran in 281 cpu hsecs
Run2 ran in 172 cpu hsecs
Name Run1 Run2 Diff
STAT...recursive cpu usage 268 147 -121
STAT...recursive calls 20,641 10,594 -10,047
Trigger overhead
64 of 92
Pre-12c vs. 12c
SQL> exec runstats_pkg.rs_start;
SQL> ALTER TABLE test_trigger_tab1
2 MODIFY a DEFAULT test_trigger_seq.NEXTVAL;
<<< ... 10 000 inserts ... >>>
SQL> exec runstats_pkg.rs_middle;
SQL> ALTER TABLE test_trigger_tab1 MODIFY a DEFAULT NULL;
SQL> BEGIN
2 FOR i IN 1..10000 LOOP
3 INSERT INTO test_trigger_tab1(a,b) VALUES (test_trigger_seq.NEXTVAL,'X');
4 END LOOP;
5 END;
6 /
SQL> exec runstats_pkg.rs_stop;
Run1 ran in 178 cpu hsecs
Run2 ran in 157 cpu hsecs
run 1 ran in 113.38% of the time
Name Run1 Run2 Diff
STAT...recursive cpu usage 164 136 -28
Much closer!
66 of 92
INSTEAD OF Trigger Issues
DML against views with INSTEAD OF triggers behaves
differently from regular DML
… especially for function-based views.
Developers often misunderstand how these triggers work
… and mishandle logical primary keys.
Proper handling of UPDATEs requires a deep understanding
of Oracle UNDO mechanisms.
68 of 92
Test Case (1)
CREATE TYPE test_tab_ot AS OBJECT (owner_tx VARCHAR2(30),
name_tx VARCHAR2(30),
object_id NUMBER,
type_tx VARCHAR2(30));
CREATE TYPE test_tab_nt IS TABLE OF test_tab_ot;
CREATE FUNCTION f_searchTestTab_tt (i_type_tx VARCHAR2) RETURN test_tab_nt IS
v_out_tt test_tab_nt;
begin
SELECT test_tab_ot(owner, object_name, object_id, object_type)
BULK COLLECT INTO v_out_tt
FROM test_tab
WHERE object_type = i_type_tx;
dbms_output.put_line('Inside f_searchTestTab_tt:'||v_out_tt.count);
RETURN v_out_tt;
END;
69 of 92
Test Case (2)
create or replace view v_search_table as
select *
from table(f_searchtesttab_tt('TABLE'));
create or replace trigger v_search_table_iiud
instead of insert or update or delete on v_search_table
begin
if inserting then
dbms_output.put_line('Insert');
elsif updating then
dbms_output.put_line('Update');
elsif deleting then
dbms_output.put_line('Delete');
end if;
end;
70 of 92
DML Check
SQL> INSERT INTO v_search_table (object_id, name_tx)
2 VALUES (-1,'A');
Insert
1 row created.
SQL> UPDATE v_search_table SET name_tx = 'Test'
2 WHERE object_id = 5;
Inside f_searchTestTab_tt:1541
Update
1 row updated.
SQL> DELETE FROM v_search_table WHERE object_id = 5;
Inside f_searchTestTab_tt:1541
Delete
1 row deleted.
Function is not fired!
Process all to update one?
71 of 92
Side-Effect of
Function-Based Views
Real problem:
WHERE clauses on function-based views are extremely
inefficient.
Solution:
Pass conditions as parameters into the function.
72 of 92
Parameterized View (1)
CREATE PACKAGE global_pkg IS
v_object_id NUMBER;
END;
CREATE FUNCTION f_searchTestTab_tt (i_type_tx varchar2) RETURN test_tab_nt is
v_out_tt test_tab_nt;
BEGIN
IF global_pkg.v_object_id IS NULL THEN
SELECT test_tab_ot(owner, object_name, object_id, object_type)
BULK COLLECT INTO v_out_tt
FROM test_tab
WHERE object_type = i_type_tx;
ELSE
SELECT test_tab_ot(owner, object_name, object_id, object_type)
BULK COLLECT INTO v_out_tt
FROM test_tab
WHERE object_id = global_pkg.v_object_id;
END IF;
dbms_output.put_line
('Inside f_searchTestTab_tt:'||v_out_tt.count);
RETURN v_out_tt;
END;
Usage (if needed)
Global parameter
73 of 92
Parameterized View (2)
SQL> BEGIN global_pkg.v_object_id:=5; END;
2 /
SQL> UPDATE v_search_table SET name_tx = 'Test'
2 WHERE object_id = 5;
Inside f_searchTestTab_tt:1
Update
1 row updated.
SQL> DELETE FROM v_search_table WHERE object_id = 5;
Inside f_searchTestTab_tt:1
Delete
1 row deleted.
74 of 92
Parameterized View (3)
SQL> exec runstats_pkg.rs_start;
SQL> BEGIN
2 global_pkg.v_object_id:=NULL;
3 FOR i IN 1..100 LOOP
4 UPDATE v_search_table SET name_tx = 'Test'||i
5 WHERE object_id = 5;
6 END LOOP;
7 END;
8 /
SQL> exec runstats_pkg.rs_middle;
SQL> BEGIN
2 global_pkg.v_object_id:=5;
3 FOR i IN 1..100 LOOP
4 UPDATE v_search_table SET name_tx = 'Test'||i
5 WHERE object_id = 5;
6 END LOOP;
7 END;
8 /
SQL> exec runstats_pkg.rs_stop;
Run1 ran in 181 cpu hsecs
Run2 ran in 28 cpu hsecs
run 1 ran in 646.43% of the time
~6.5x improvement
76 of 92
The Real Story
Synthetic primary keys on views
Sometimes required by front-end environments (to uniquely
identify the row in the cache)
Extremely dangerous if blindly used for
INSERT/UPDATE/DELETE
77 of 92
Test Case
CREATE OR REPLACE VIEW v_test_tab
AS
SELECT 'Main|'||object_id pk_tx,
a.*
FROM test_tab_main a
UNION ALL
SELECT 'Other|'||object_id pk_tx,
a.*
FROM test_tab_other a;
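The slides demonstrating the failure mode are not in this excerpt, but the shape of the problem can be sketched. This hypothetical trigger (an assumption built on the view above; only object_name is updated for brevity) must parse pk_tx to route the DML to the correct base table, so any client that blindly fabricates or edits the synthetic key corrupts the routing:

```sql
-- Hypothetical sketch: routing an UPDATE by parsing the synthetic key
-- 'Main|<id>' or 'Other|<id>' built by v_test_tab.
create or replace trigger v_test_tab_iu
instead of update on v_test_tab
declare
    v_source_tx varchar2(30) :=
        substr(:old.pk_tx, 1, instr(:old.pk_tx,'|')-1);
    v_id_nr number :=
        to_number(substr(:old.pk_tx, instr(:old.pk_tx,'|')+1));
begin
    if v_source_tx = 'Main' then
        update test_tab_main
           set object_name = :new.object_name
         where object_id = v_id_nr;
    elsif v_source_tx = 'Other' then
        update test_tab_other
           set object_name = :new.object_name
         where object_id = v_id_nr;
    else
        -- a made-up or mangled key must be rejected, never guessed at
        raise_application_error(-20003,
            'Unknown source in key: '||:old.pk_tx);
    end if;
end;
/
```

The key is a routing instruction, not just a row identifier, which is why blindly reusing it for INSERT/UPDATE/DELETE is dangerous.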
80 of 92
Resource Issues
Problem
INSTEAD-OF UPDATE triggers often reference all columns of the
underlying tables
… instead of trying to figure out what has been changed.
UNDO is generated for all columns mentioned in the UPDATE
statement
… regardless of whether they were changed
… which causes major I/O overhead
Alternative:
Use Dynamic SQL to modify only changed columns
… i.e. trade off CPU utilization for better I/O!
81 of 92
Regular Trigger
create or replace trigger v_test_tab_iu
instead of update on v_test_tab
begin
if :new.object_type='TABLE' then
update test_tab_main
set owner=:new.owner, object_name=:new.object_name,
subobject_name=:new.subobject_name,
object_type=:new.object_type,
created=:new.created, last_ddl_time=:new.last_ddl_time,
timestamp=:new.timestamp, status=:new.status,
temporary=:new.temporary, generated=:new.generated,
secondary=:new.secondary, namespace=:new.namespace,
edition_name=:new.edition_name, sharing=:new.sharing,
editionable=:new.editionable,
oracle_maintained=:new.oracle_maintained
where object_id=:old.object_id;
else
update test_tab_other
set << the same list of columns as above >>
where object_id=:old.object_id;
end if;
end;
82 of 92
Dynamic SQL Trigger (1)
create or replace trigger v_test_tab_dynamic_iu
instead of update on v_test_tab
declare
v_main_rec test_tab_main%rowtype;
v_mainupdate_tx varchar2(32767);
v_other_rec test_tab_other%rowtype;
v_otherupdate_tx varchar2(32767);
begin
if :new.object_type='TABLE' then
-- compare old/new for each attribute
if :old.object_name is null and :new.object_name is not null
or :old.object_name is not null and :new.object_name is null
or :old.object_name!=:new.object_name then
v_main_rec.object_name:=:new.object_name;
v_mainupdate_tx:=v_mainupdate_tx||
case
when v_mainupdate_tx is not null then ','
else null
end||chr(10)||
' object_name=v_rec.object_name';
end if;
...
Store new value
(if changed)
83 of 92
Dynamic SQL Trigger (2)
v_main_rec.object_id:=:old.object_id;
v_mainupdate_tx:=
'declare '||chr(10)||
' v_rec test_tab_main%rowtype:=:1;'||chr(10)||
'begin '||chr(10)||
' update test_tab_main ' ||chr(10)||
' set '||v_mainupdate_tx||chr(10)||
' where object_id=v_rec.object_id;'||chr(10)||
'end;';
execute immediate v_mainupdate_tx using v_main_rec;
else
<< the same logic as above for test_tab_other >>
end if;
end;
84 of 92
Performance Tradeoff
SQL> exec runstats_pkg.rs_start;
<<disable Dynamic trigger, enable regular trigger>>
SQL> begin
2 for i in 1..10000 loop
3 UPDATE v_test_tab SET object_name='Test'||i WHERE object_id = 5;
4 end loop;
5 END;
6 /
SQL> exec runstats_pkg.rs_middle;
<<enable Dynamic trigger, disable regular trigger and rerun the same logic>>
SQL> exec runstats_pkg.rs_stop;
Run1 ran in 390 cpu hsecs
Run2 ran in 496 cpu hsecs
run 1 ran in 78.63% of the time
Name Run1 Run2 Diff
STAT...recursive calls 20,329 30,244 9,915
STAT...execute count 20,102 30,095 9,993
STAT...undo change vector size 2,495,476 938,552 -1,556,924
STAT...redo size 5,820,096 2,772,332 -3,047,764
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
136,486 184,679 48,193 73.90%
More CPU time
Much less I/O
86 of 92
Extending INSTEAD OF
You can also create a COMPOUND trigger instead of an
INSTEAD OF trigger
Trigger with common program area
Impact
Pro:
Common program area – possible to share code and global variables
simulating statement-level BEFORE event
Sorry - there is no way to simulate an AFTER event
Con:
Yet another place to hide business logic
87 of 92
CREATE OR REPLACE PACKAGE counter_pkg IS
v_nr NUMBER:=0;
PROCEDURE p_check;
procedure p_up;
END;
CREATE OR REPLACE PACKAGE BODY counter_pkg IS
PROCEDURE p_check is
BEGIN
dbms_output.put_line('Fired:'||counter_pkg.v_nr);
counter_pkg.v_nr:=0;
END;
procedure p_up is
begin
counter_pkg.v_nr:=counter_pkg.v_nr+1;
end;
END;
create or replace function f_securityCheck_yn return varchar2 is
begin
counter_pkg.p_up;
if sys_context ('USERENV', 'CLIENT_INFO') ='SalMaint' then
return 'Y';
else
return 'N';
end if;
end;
Monitoring Tool
88 of 92
CREATE OR REPLACE TRIGGER v_search_table_IIUD
INSTEAD OF INSERT OR UPDATE OR DELETE ON v_search_table
BEGIN
if f_securityCheck_yn = 'Y' then
IF INSERTING THEN
dbms_output.put_line('Insert');
ELSIF UPDATING THEN
dbms_output.put_line('Update');
ELSIF DELETING THEN
dbms_output.put_line('Delete');
END IF;
end if;
END;
Option A - Regular
89 of 92
create or replace trigger v_search_table_IIUD_comp
for insert or update or delete on v_search_table
COMPOUND TRIGGER
v_check_yn varchar2(1);
instead of each row is
begin
if v_check_yn is null then
v_check_yn:=f_securityCheck_yn;
end if;
if v_check_yn = 'Y' then
IF INSERTING THEN
dbms_output.put_line('Insert');
ELSIF UPDATING THEN
dbms_output.put_line('Update');
ELSIF DELETING THEN
dbms_output.put_line('Delete');
END IF;
end if;
end instead of each row;
end;
Option B - Compound
Common area
Row-level trigger
90 of 92
SQL> alter trigger v_search_table_IIUD_COMP disable;
SQL> alter trigger v_search_table_IIUD enable;
SQL> update v_search_table set name_tx = 'Test';
Inside f_searchTestTab_tt:1571
1571 rows updated.
SQL> exec counter_pkg.p_check;
Fired:1571
SQL> alter trigger v_search_table_IIUD_COMP enable;
SQL> alter trigger v_search_table_IIUD disable;
SQL> update v_search_table set name_tx = 'Test';
Inside f_searchTestTab_tt:1571
1571 rows updated.
SQL> exec counter_pkg.p_check;
Fired:1
SQL> exec dbms_application_info.set_client_info('SalMaint');
SQL> update v_search_table set name_tx = 'Test';
Inside f_searchTestTab_tt:1571
Update
Update
....
1571 rows updated.
SQL> exec counter_pkg.p_check;
Fired:1
Performance Impact
Only one call!
Option A - Decline
Option B - Decline
Option B - Success
91 of 92
Part 2: Summary
Triggers are very powerful (as long as they are applied
correctly).
The “mutating table” issue usually hides architectural mistakes.
… and compound triggers could solve other problems too!
INSTEAD OF triggers have to be handled carefully.
92 of 92
Contact Information
Michael Rosenblum – mrosenblum@dulcian.com
Dulcian, Inc. website - www.dulcian.com
Blog: wonderingmisha.blogspot.com