This is the presentation I used for my Oracle Week 2016 session. It includes new features from both 12cR1 and 12cR2.
Agenda:
Developing PL/SQL:
- Composite datatypes, advanced cursors, dynamic SQL, tracing, and more…
Compiling PL/SQL:
- dependencies, optimization levels, and DBMS_WARNING
Tuning PL/SQL:
- Global temporary tables (GTT), result cache, and memory handling
- Oracle 11g, 12cR1 and 12cR2 new useful features
- SQLcl – New replacement tool for SQL*Plus (if we have time)
Oracle Week 2015 presentation (Presented on November 15, 2015)
Agenda:
Aggregative and advanced grouping options
Analytic functions, ranking and pagination
Hierarchical and recursive queries
Oracle 12c new rows pattern matching feature
XML and JSON handling with SQL
Regular Expressions
SQLcl – a new replacement tool for SQL*Plus from Oracle
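The ranking and pagination topics in this agenda can be sketched with window functions. A minimal illustration using Python's built-in sqlite3 module (SQLite 3.25+ supports window functions); the table and data here are invented for the example, not taken from the slides:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [("Ann", "IT", 900), ("Ben", "IT", 700), ("Cara", "HR", 800), ("Dan", "HR", 800)],
)

# RANK() within each department: ties share a rank, the next rank is skipped.
rows = conn.execute("""
    SELECT name, dept,
           RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS rnk
    FROM emp
    ORDER BY dept, rnk, name
""").fetchall()
print(rows)
```

The same OVER clause works in Oracle; in 12c the FETCH FIRST N ROWS ONLY syntax covers the pagination case without a subquery.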
This is a presentation from Oracle Week 2016 (Israel). It is a newer version of last year's presentation, with new 12cR2 features and a demo.
In the agenda:
Aggregative and advanced grouping options
Analytic functions, ranking and pagination
Hierarchical and recursive queries
Regular Expressions
Oracle 12c new rows pattern matching
XML and JSON handling with SQL
Oracle 12c (12.1 + 12.2) new features
SQL Developer Command Line tool
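The hierarchical and recursive queries item in the agenda can be sketched with a recursive CTE. This small example uses Python's built-in sqlite3 (Oracle supports the same WITH RECURSIVE style alongside its classic CONNECT BY); the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT, mgr INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, "CEO", None), (2, "VP", 1), (3, "Dev", 2)])

# Walk the management chain top-down, tracking each employee's depth.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM emp WHERE mgr IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM emp e JOIN chain c ON e.mgr = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # hierarchy flattened with a depth level per row
```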
The art of querying – newest and advanced SQL techniques – Zohar Elkayam
Presentation from Oracle Week 2017.
Agenda:
Aggregative and advanced grouping options
Analytic functions, ranking and pagination
Hierarchical and recursive queries
Regular Expressions
Oracle 12c new rows pattern matching
XML and JSON handling with SQL
Oracle 12c (12.1 + 12.2) new features
SQL Developer Command Line tool (if time allows)
Oracle 18c
Is SQLcl the Next Generation of SQL*Plus? – Zohar Elkayam
A session I presented at ILOUG in May 2016.
Introducing SQLcl – a new command line tool from the SQL Developer team that might replace SQL*Plus, a tool that has been around for over 30 years, along with all of its functions!
In this session, we will explore the new functionality of SQLcl and use a live demonstration to show what SQLcl has to offer over the old SQL*Plus. We will use real-life examples to see what makes this tool such a time saver in day-to-day tasks for DBAs and developers who prefer using the command line interface.
This is a presentation I gave at the UKOUG user conference in Scotland. SQLcl is a new command line tool from the developers of SQL Developer in Oracle. This presentation is accompanied by a live demo that can be downloaded from my blog.
Oracle 12c New Features For Better Performance – Zohar Elkayam
Oracle 12cR1 and 12cR2 came with some great features for better performance and scaling. In this session we will talk about some of the new features that might improve performance greatly: optimizer changes, adaptive plan improvements, changes to statistics gathering, and Oracle 12cR2's new sharding option.
On the agenda:
- Oracle Database In Memory (Column Store)
- Oracle Sharding (12.2.0.1)
- Optimizer changes in 12c
- Statistics changes in 12c.
Presented first at ilOUG - Israel Oracle User Group meetup in February 2017.
[including the promised hidden slide :) ]
OOW2016: Exploring Advanced SQL Techniques Using Analytic Functions – Zohar Elkayam
This is the presentation I gave at Oracle OpenWorld 2016 - the topic was group functions and analytic functions.
We talked about reporting analytic functions, ranking, and a couple of Oracle 12c new features such as the top-N query syntax and pattern matching.
This presentation includes the bonus slides which were not presented at the event itself, as promised.
Oracle Database In-Memory Option for ILOUG – Zohar Elkayam
Oracle 12.1.0.2 introduced a new feature: the Oracle Database In-Memory option (DBIM).
This is the presentation I gave to the ILOUG DBA SIG, where I introduced the technology and how to use it.
MySQL 5.7 New Features for Developers – a session for DOAG (a German Oracle user group conference) in 2016. A similar version was also presented at the Israel MySQL User Group in November 2016.
This presentation reviews new features in MySQL 5.7: the optimizer, the InnoDB engine, the native JSON data type, and the performance and sys schemas.
Oracle Database Performance Tuning Advanced Features and Best Practices for DBAs – Zohar Elkayam
Oracle Week 2017 slides.
Agenda:
Basics: How and What To Tune?
Using the Automatic Workload Repository (AWR)
Using AWR-Based Tools: ASH, ADDM
Real-Time Database Operation Monitoring (12c)
Identifying Problem SQL Statements
Using SQL Performance Analyzer
Tuning Memory (SGA and PGA)
Parallel Execution and Compression
Oracle Database 12c Performance New Features
Adding real time reporting to your database: Oracle DB In-Memory – Zohar Elkayam
This is a presentation I gave at the UKOUG Scotland user conference in June 2015. It describes a proof of concept we did for Clarizen on the Oracle 12c Database In-Memory option.
Exploring Oracle Multitenant in Oracle Database 12c – Zohar Elkayam
Oracle's multitenant architecture is one of the biggest changes in Oracle 12c. In this presentation, we will review this major change and see how it can be effective for daily use.
The agenda:
- The Multitenant Container Database Architecture
- Multitenant Benefits and Impacts
- CDB and PDB Deployments and Provisioning
- Tools and Self-service tools
This presentation is based on the work of Ami Aharonovich and was adapted with his permission.
Things Every Oracle DBA Needs to Know about the Hadoop Ecosystem – Zohar Elkayam
A session I presented at BGOUG in June 2016.
Big data is one of the biggest buzzwords in today's market. Terms such as Hadoop, HDFS, YARN, Sqoop, and non-structured data have been scaring DBAs since 2010 - but where does the DBA team really fit in?
In this session, we will discuss everything database administrators and database developers need to know about big data. We will demystify the Hadoop ecosystem and explore the different components. We will learn how HDFS and MapReduce are changing the data world, and where traditional databases fit into the grand scheme of things. We will also talk about why DBAs are the perfect candidates to transition into big data and Hadoop professionals and experts.
Docker Concepts for Oracle/MySQL DBAs and DevOps – Zohar Elkayam
Oracle Week 2017 Slides
Agenda:
Docker overview – why do we even need containers?
Installing Docker and getting started
Images and Containers
Docker Networks
Docker Storage and Volumes
Oracle and Docker
Docker tools, GUI and Swarm
Performance Schema is a powerful diagnostic instrument for:
- Query performance
- Complicated locking issues
- Memory leaks
- Resource usage
- Problematic behavior, caused by inappropriate settings
- More
It comes with hundreds of options that allow precise tuning of what to instrument. More than 100 consumers store the collected data.
In this tutorial, we will try all the important instruments out. We will provide a test environment and a few typical problems that could hardly be solved without Performance Schema. You will not only learn how to collect and use this information but also gain hands-on experience with it.
Tutorial at Percona Live Austin 2019
An AMIS Overview of Oracle database 12c (12.1) – Marco Gralike
Presentation used by Lucas Jellema and Marco Gralike during the AMIS Oracle Database 12c Launch event on Monday the 15th of July 2013 (many thanks to Tom Kyte, Oracle, for allowing us to use some of his material).
Optimizer Histograms: When They Help and When They Do Not – Sveta Smirnova
Talk for the pre-FOSDEM MySQL Day on February 1, 2019.
Last year I worked on several tickets where the data followed the same pattern: millions of popular products fit into a couple of categories, and the rest used the rest. We had a hard time finding a solution for retrieving goods fast.
MySQL 8.0 has a feature which resolves such issues: optimizer histograms, which store statistics about how many values fall into each data bucket.
However, in real life histograms do not help with every query that accesses non-uniform data. How you write a query, the number of rows in the table, and the data distribution may all affect whether histograms are used.
In this session I show examples demonstrating how the optimizer uses histograms.
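MySQL 8.0's optimizer histograms are equi-height: each bucket covers roughly the same number of rows, so a few popular values get their own narrow buckets. The sketch below is a conceptual Python illustration of that bucketing idea, not MySQL's actual algorithm; the function name and data are invented:

```python
def equi_height_buckets(values, nbuckets):
    """Split sorted values into buckets of (lower, upper, cumulative_fraction)."""
    values = sorted(values)
    n = len(values)
    per_bucket = n / nbuckets
    buckets = []
    for b in range(nbuckets):
        lo = values[int(b * per_bucket)]
        hi = values[min(int((b + 1) * per_bucket) - 1, n - 1)]
        cum = min((b + 1) * per_bucket, n) / n
        buckets.append((lo, hi, round(cum, 2)))
    return buckets

# Skewed data: value 1 dominates, like millions of goods in one category.
data = [1] * 90 + list(range(2, 12))
buckets = equi_height_buckets(data, 4)
print(buckets)  # the dominant value fills three of the four buckets
```

Because the popular value occupies most of the buckets, the optimizer can see at a glance that `col = 1` matches most of the table while any other value is rare.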
MySQL Performance Schema in Action: the Complete Tutorial – Sveta Smirnova
Performance Schema is a powerful diagnostic instrument for:
- Query performance
- Complicated locking issues
- Memory leaks
- Resource usage
- Problematic behavior, caused by inappropriate settings
- More
It comes with hundreds of options that allow precise tuning of what to instrument. More than 100 consumers store the collected data.
In this tutorial we will try all the important instruments out. We will provide a test environment and a few typical problems that could hardly be solved without Performance Schema. You will not only learn how to collect and use this information, but also gain hands-on experience with it.
Presented at Percona Live Frankfurt, 2018: https://www.percona.com/live/e18/sessions/mysql-performance-schema-in-action-the-complete-tutorial
Billion Goods in Few Categories: How Histograms Save a Life? – Sveta Smirnova
We store data with the intention to use it: search, retrieve, group, sort... To perform these actions effectively, MySQL storage engines index the data and communicate statistics to the Optimizer when it compiles a query execution plan. This approach works perfectly well unless your data distribution is uneven.
Last year I worked on several tickets where the data followed the same pattern: millions of popular products fit into a couple of categories, and the rest used the rest. We had a hard time finding a solution for retrieving goods fast. Workarounds for version 5.7 were offered. However, a new MySQL 8.0 feature - histograms - would work better, cleaner, and faster. This is how the idea of the talk was born.
I will discuss
- how index statistics are physically stored
- which data is exchanged with the Optimizer
- why it is not enough to make the correct index choice
In the end, I will explain which issues histograms resolve and why index statistics alone are insufficient for fast retrieval of unevenly distributed data.
https://www.percona.com/live/e18/sessions/billion-goods-in-few-categories-how-histograms-save-a-life
Billion Goods in Few Categories: How Histograms Save a Life? – Sveta Smirnova
We store data with an intention to use it: search, retrieve, group, sort... To do it effectively, the MySQL Optimizer uses index statistics when it compiles the query execution plan. This approach works excellently unless your data distribution is uneven.
Last year I worked on several support tickets where data follows the same pattern: millions of popular products fit into a couple of categories and the rest used the rest. We had a hard time finding a solution for retrieving goods fast. We offered workarounds for version 5.7. However, a new MariaDB and MySQL 8.0 feature - histograms - would work better, cleaner and faster. The idea of the talk was born.
Of course, histograms are not a panacea and do not help in all situations.
I will discuss
- how index statistics are physically stored by the storage engine
- which data is exchanged with the Optimizer
- why it is not enough to make the correct index choice
- when histograms can help and when they cannot
- differences between MySQL and MariaDB histograms
Talk for Percona Live 2019 Austin: https://www.percona.com/live/19/sessions/billion-goods-in-few-categories-how-histograms-save-a-life
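The core problem the talk describes - why average-based index statistics mislead on skewed data - can be shown with a toy calculation. The numbers below are invented for illustration: with N rows and D distinct values, the classic optimizer estimate for `col = x` is N / D rows, whatever x is:

```python
# One hugely popular value plus a long tail of rare ones.
rows = [("popular",)] * 999_000 + [(f"rare{i}",) for i in range(1000)]
n = len(rows)                          # 1,000,000 rows
distinct = len({r[0] for r in rows})   # 1001 distinct values

uniform_estimate = n // distinct       # what avg-based statistics predict
actual_popular = sum(1 for r in rows if r[0] == "popular")
actual_rare = sum(1 for r in rows if r[0] == "rare0")

print(uniform_estimate)  # the same ~999-row guess for every value
print(actual_popular)    # the popular value really matches 999,000 rows
print(actual_rare)       # a rare value really matches 1 row
```

The uniform estimate is off by roughly three orders of magnitude in both directions, which is exactly the gap a histogram closes.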
Connect 2016 - Move Your XPages Applications to the Fast Lane – Howard Greenberg
Are your XPages applications performing like a Florida senior citizen driving in the left lane at 55 mph? A key to speeding up your XPages applications is knowledge of the JSF lifecycle, partial refresh and partial execution. This session will cover these concepts and then apply them to optimizing an XPages application. Learn how to use tools to measure the performance of your XPages and determine where the bottlenecks are. Several sample applications will be analyzed along with alternative programming choices to improve their performance. Learn how to dramatically increase your XPages performance and make your users happy - you might even get a speeding ticket after this session!
Ogh Ace Case, Part 1 and 2, Oracle XML Database – Marco Gralike
Presentation given in April 2009, during the kickoff of the "Oracle ACEcase" series for the Dutch Oracle Usergroup, the OGh, at the Oracle HQ, The Meern, Holland
Exploring Advanced SQL Techniques Using Analytic Functions – Zohar Elkayam
A session I presented at ILOUG in May 2016.
Even though DBAs and developers are writing SQL queries every day, it seems that advanced SQL techniques such as multi-dimension aggregation and analytic functions still remain relatively unknown. In this session, we will explore some of the common real-world usages for analytic functions, and understand how to take advantage of this great and useful tool. We will deep dive into ranking based on values and groups; understand aggregation of multiple dimensions without a group by; see how to do inter-row calculations; and much, much more…
Together we will see how we can unleash the power of analytics using Oracle 11g best practices and Oracle 12c new features.
Rapid Cluster Computing with Apache Spark 2016 – Zohar Elkayam
This is the presentation I used for Oracle Week 2016 session about Apache Spark.
In the agenda:
- The Big Data problem and possible solutions
- Basic Spark Core
- Working with RDDs
- Working with Spark Cluster and Parallel programming
- Spark modules: Spark SQL and Spark Streaming
- Performance and Troubleshooting
Advanced PLSQL Optimizing for Better Performance – Zohar Elkayam
A Presentation from Oracle Week 2015 in Israel
Agenda:
• Developing PL/SQL:
o Composite Data Types: Records, Collections and Table type
o Advanced Cursors: Ref cursor, Cursor function, Cursor subquery in PL/SQL
o Bulk Binding
o Dynamic SQL – SQL Injection
o Tracing PL/SQL Execution
o Design patterns for PL/SQL: Autonomous Transactions, Invoker and Definer rights, serially_reusable code
o Triggers Improvements
• Compiling PL/SQL:
o PL/SQL Fine-Grain Dependency Management
o PLSQL_OPTIMIZE_LEVEL parameter
o PL/SQL Compile-Time Warnings and Using DBMS_WARNING package
• Tuning PL/SQL:
o Handling Packages in Memory
o Global Temporary Tables
o PL/SQL Function Result Cache and pitfalls
• Oracle Database 12c PL/SQL new features: What is new in Oracle 12c
o Language Usability Enhancements
o New Limitations
• Additional useful features, Tips and Tricks for better performance
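The bulk binding item in the agenda (PL/SQL's FORALL) is about cutting per-row context switches between the PL/SQL and SQL engines. The same batching idea can be sketched outside Oracle with Python's sqlite3: one `executemany()` call instead of a loop of single `execute()` calls. This is an analogy, not Oracle code; table and sizes are invented:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
data = [(i,) for i in range(10_000)]

t0 = time.perf_counter()
for row in data:                                      # row-by-row, one call each
    conn.execute("INSERT INTO t VALUES (?)", row)
t_rowwise = time.perf_counter() - t0

conn.execute("DELETE FROM t")

t0 = time.perf_counter()
conn.executemany("INSERT INTO t VALUES (?)", data)    # one batched call, FORALL-style
t_bulk = time.perf_counter() - t0

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count, t_rowwise, t_bulk)  # same rows inserted; compare the two timings
```

On most machines the batched path is noticeably faster, for the same reason FORALL beats a row-by-row loop: less per-statement overhead.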
This is a recording of my Advanced Oracle Troubleshooting seminar preparation session - where I showed how I set up my command line environment and some of the main performance scripts I use!
The Hadoop Ecosystem for developers session in DevGeekWeek in Israel.
This was a day-long session on big data problems and the Hadoop solution. We also talked about Spark and NoSQL.
Modern Linux Performance Tools for Application Troubleshooting – Tanel Poder
Modern Linux Performance Tools for Application Troubleshooting.
Mostly demos and focused on application/process troubleshooting, not systemwide summaries.
With the help of this small proof of concept, I have tried to demonstrate the usage of Neo4j (a graph DB) as a metastore for a data lake or a DW. Graph DBs can store highly relational data and help us do data discovery and impact analysis, which is a bit more complex to do in an RDBMS.
Postgres has the unique ability to act as a powerful data aggregator in many data centers. This presentation shows how Postgres's extensibility, access to foreign data sources, and ability to handle NoSQL-like and data warehousing workloads give it unmatched capabilities to function in this role.
UiPath Studio Web workshop series - Day 2 – DianaGray10
📣 Welcome to Day 2 of the UiPath Studio Web Workshop! We will delve into advanced techniques for array analysis and data categorization using UiPath Studio Web. Join us as we explore how to streamline your automation processes, enabling you to efficiently organize and categorize data. In this session, we will also elevate your data handling skills with UiPath Studio Web: learn to generate data tables from text files and perform advanced string operations such as concatenating first, middle, and last names. Join us as we delve into practical exercises to enhance your proficiency in data handling.
👉 Topics covered:
📌 Task 1: Array Extremes
Find Max and Min Values from List of Array of Numbers
📌 Task 2: Data Segregation by Switch activity
Utilize "Switch" Activity for Data Categorization
Organize Data into Distinct Excel Sheets by Departments
📌Task 3: Generating Data Tables
Overview of Generating Data Tables from Text Files
Practical Exercise: Generate Data Table from a Sample Text File
📌 Task 4: Concatenating Names in Excel
Extracting First Name, Middle Name, and Last Name from Excel
Concatenating Names and Writing Results into Full Name Column
Speakers:
Vajrang Billlakurthi, Digital Transformation Leader, Vajrang IT Services Pvt Ltd. and UiPath MVP
Swathi Nelakurthi, Associate Automation Developer, Vajrang IT Services Pvt Ltd
Rahul Goyal, SR. Director, ERP Systems, Ellucian and UiPath MVP
Nearly every application uses some sort of data storage. Proper data structure can lead to increased performance, reduced application complexity, and ensure data integrity. Foreign keys, indexes, and correct data types truly are your best friends when you respect them and use them for the correct purposes. Structuring data to be normalized and with the correct data types can lead to significant performance increases. Learn how to structure your tables to achieve normalization, performance, and integrity, by building a database from the ground up during this tutorial.
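The tutorial's core claim - foreign keys and correct types protect integrity - can be demonstrated compactly. A sketch using Python's built-in sqlite3 (note that SQLite requires foreign-key enforcement to be switched on explicitly); the schema is invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # off by default in SQLite
conn.executescript("""
    CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE emp  (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                       dept_id INTEGER NOT NULL REFERENCES dept(id));
""")
conn.execute("INSERT INTO dept VALUES (1, 'IT')")
conn.execute("INSERT INTO emp VALUES (1, 'Ann', 1)")       # valid parent row

try:
    conn.execute("INSERT INTO emp VALUES (2, 'Ben', 99)")  # no such department
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False
print(orphan_allowed)  # False: the database rejected the orphan row
```

The application never gets a chance to store the inconsistent row; that is the integrity guarantee the tutorial builds toward.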
On Monday evening, July 15, AMIS organized the seminar 'Oracle database 12c revealed'. This evening offered AMIS Oracle professionals the first opportunity to see the new features of Oracle database 12c in action! The AMIS specialists, who had carried out beta testing for more than a year, showed what is new and how we will be putting it to use in the coming years.
This presentation was given that evening as a parallel session.
Oracle Advanced SQL and Analytic Functions – Zohar Elkayam
Even though DBAs and developers are writing SQL queries every day, it seems that advanced SQL techniques such as multidimension aggregation and analytic functions still remain relatively unknown. In this session, we will explore some of the common real-world usages for analytic function and understand how to take advantage of this great and useful tool. We will deep dive into ranking based on values and groups, understand aggregation of multiple dimensions without a group by, see how to do inter-row calculations, and much more.
These are the presentation slides from Kscope17, presented on June 28, 2017.
Things Every Oracle DBA Needs to Know About the Hadoop Ecosystem 20170527 – Zohar Elkayam
Big data is one of the biggest buzzwords in today's market. Terms such as Hadoop, HDFS, YARN, Sqoop, and non-structured data have been scaring DBAs since 2010, but where does the DBA team really fit in?
In this session, we will discuss everything database administrators and database developers need to know about big data. We will demystify the Hadoop ecosystem and explore the different components. We will learn how HDFS and MapReduce are changing the data world and where traditional databases fit into the grand scheme of things. We will also talk about why DBAs are the perfect candidates to transition into big data and Hadoop professionals and experts.
This is the presentation I gave in Kscope17, on June 27, 2017.
Things Every Oracle DBA Needs to Know About the Hadoop Ecosystem (c17lv version)Zohar Elkayam
Big data is one of the biggest buzzword in today's market. Terms like Hadoop, HDFS, YARN, Sqoop, and non-structured data has been scaring DBA's since 2010 - but where does the DBA team really fit in?
In this session, we will discuss everything database administrators and database developers needs to know about big data. We will demystify the Hadoop ecosystem and explore the different components. We will learn how HDFS and MapReduce are changing the data world, and where traditional databases fits into the grand scheme of things. We will also talk about why DBAs are the perfect candidates to transition into Big Data and Hadoop professionals and experts.
Learning Objective #1: What is the Big Data challenge
Learning Objective #2: Learn about Hadoop - HDFS, MapReduce and Yarn
Learning Objective #3: Understand where a DBA fits in this world
Introduction to Oracle Data Guard BrokerZohar Elkayam
This is an old deck I recently renewed for a customer session. This is the introduction to Oracle Data Guard broker feature, how to deploy it, how to use it and what are its benefits.
This presentation is based on version 11g but most of it is also compatible to Oracle 12c,
Agenda:
- Oracle Data Guard overview
- Dataguard broker introduction
- Configuring and using the data guard
- Live Demos
Things Every Oracle DBA Needs To Know About The Hadoop EcosystemZohar Elkayam
This is a presentation which was presented in multiple forums (in one way or the other). This is a short introduction for Oracle personal (DBAs and DB Developers) for Big Data and the Hadoop Ecosystem.
In the agenda:
• What is the Big Data challenge?
• A Big Data Solution: Apache Hadoop
• HDFS
• MapReduce and YARN
• Hadoop Ecosystem: HBase, Sqoop, Hive, Pig and other tools
• Another Big Data Solution: Apache Spark
• Where does the DBA fits in?
This presentation was presented in DOAG 2016, HROUG 2016, BGOUG 2016, ILOUG Tech Days 2016 and other small private sessions (Israel Technology Police leaders, CIO forum, Amdocs, and others).
Exploring Advanced SQL Techniques Using Analytic FunctionsZohar Elkayam
Session from BGOUG I presented in June, 2016
Even though DBAs and developers are writing SQL queries every day, it seems that advanced SQL techniques such as multi-dimension aggregation and analytic functions are still relatively remain unknown. In this session, we will explore some of the common real-world usages for analytic function, and understand how to take advantage of this great and useful tool. We will deep dive into ranking based on values and groups; understand aggregation of multiple dimensions without a group by; see how to do inter-row calculations, and much-much more…
Together we will see how we can unleash the power of analytics using Oracle 11g best practices and Oracle 12c new features.
Introduction to Big Data and NoSQL.
This presentation was given to the Master DBA course at John Bryce Education in Israel.
Work is based on presentations by Michael Naumov, Baruch Osoveskiy, Bill Graham and Ronen Fidel.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
2. 2
Who am I?
• Zohar Elkayam, CTO at Brillix
• Programmer, DBA, team leader, database trainer,
public speaker, and a senior consultant for over 18
years
• Oracle ACE Associate
• Part of ilOUG – Israel Oracle User Group
• Blogger – www.realdbamagic.com and
www.ilDBA.co.il
3. 3
About Brillix
• We offer complete, integrated end-to-end solutions based on
best-of-breed innovations in database, security and big data
technologies
• We provide complete end-to-end 24x7 expert remote
database services
• We offer professional customized on-site trainings, delivered
by our top-notch world recognized instructors
5. 5
Agenda
• Developing PL/SQL:
– Composite datatypes, advanced cursors, dynamic SQL,
tracing, and more…
• Compiling PL/SQL:
– dependencies, optimization levels, and DBMS_WARNING
• Tuning PL/SQL:
– GTT, Result cache and Memory handling
• Oracle 11g, 12cR1 and 12cR2 new useful features
• SQLcl – New replacement tool for SQL*Plus (if we
have time)
6. 6
Our Goal Today
• Learning new and old PL/SQL techniques
• We will not become experts in everything
• Getting to know new features (12cR1 and 12cR2)
• This is a starting point – don’t be afraid to try
9. 9
Developing PL/SQL
• Composite datatypes
• Advanced Cursors and Bulk operations
• Dynamic SQL and SQL Injection
• Autonomous Transactions
• 11g mentionable features
• 12cR1 new development features
• 12cR2 new features
11. 11
Composite Datatypes
• Collections
– Nested table, varray
– Associative arrays/PLSQL tables
• Use collections methods
• Manipulate collections
• Distinguish between the different types of
collections and when to use them
12. 12
Understanding Collections
• A collection is a group of elements, all of the
same type.
• Collections work like arrays.
• Collections can store instances of an object type and,
conversely, can be attributes of an object type.
• Types of collections in PL/SQL:
– Associative arrays
• String-indexed collections
• INDEX BY pls_integer or BINARY_INTEGER
– Nested tables
– Varrays
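The three collection types listed above can be sketched side by side in one anonymous block (a minimal illustration; the type and variable names are hypothetical, not from the deck):

```sql
DECLARE
  -- Associative array: indexed by PLS_INTEGER, BINARY_INTEGER, or a string type
  TYPE t_salaries IS TABLE OF NUMBER INDEX BY VARCHAR2(30);
  -- Nested table: unbounded, usable in both SQL and PL/SQL
  TYPE t_names IS TABLE OF VARCHAR2(50);
  -- Varray: bounded, retains element ordering
  TYPE t_grades IS VARRAY(5) OF PLS_INTEGER;

  v_salaries t_salaries;                         -- no constructor needed
  v_names    t_names  := t_names('Scott', 'King');
  v_grades   t_grades := t_grades(90, 85);
BEGIN
  v_salaries('KING') := 24000;                   -- string-indexed assignment
  DBMS_OUTPUT.PUT_LINE(v_names.COUNT || ' names, '
                       || v_grades.COUNT || ' grades');
END;
/
```

Note that only the nested table and varray require a constructor call before use; the associative array is ready as soon as it is declared.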
14. 15
Using Associative Arrays
Associative arrays:
• That are indexed by strings can improve
performance
• Are pure memory structures that are much faster
than schema-level tables
• Provide significant additional flexibility
[Figure: two associative arrays, one indexed by PLS_INTEGER (subscripts 1-6) and one indexed by VARCHAR2 (subscripts a, f, i, o, t, w)]
15. 17
Creating the Array
Associative array in PL/SQL (string-indexed):
TYPE type_name IS TABLE OF element_type
INDEX BY VARCHAR2(size)
CREATE OR REPLACE PROCEDURE report_credit
(p_last_name customers.cust_last_name%TYPE,
p_credit_limit customers.credit_limit%TYPE)
IS
TYPE typ_name IS TABLE OF customers%ROWTYPE
INDEX BY customers.cust_email%TYPE;
v_by_cust_email typ_name;
i VARCHAR2(30);
PROCEDURE load_arrays IS
BEGIN
FOR rec IN (SELECT * FROM customers WHERE cust_email IS NOT NULL)
LOOP
-- Load up the array in single pass to database table.
v_by_cust_email (rec.cust_email) := rec;
END LOOP;
END;
...
Create the string-indexed associative array type.
Create the string-indexed associative array variable.
Populate the string-indexed associative array variable.
16. 18
Traversing the Array
...
BEGIN
load_arrays;
i:= v_by_cust_email.FIRST;
dbms_output.put_line ('For credit amount of: ' || p_credit_limit);
WHILE i IS NOT NULL LOOP
IF v_by_cust_email(i).cust_last_name = p_last_name
AND v_by_cust_email(i).credit_limit > p_credit_limit
THEN dbms_output.put_line ( 'Customer '||
v_by_cust_email(i).cust_last_name || ': ' ||
v_by_cust_email(i).cust_email || ' has credit limit of: ' ||
v_by_cust_email(i).credit_limit);
END IF;
i := v_by_cust_email.NEXT(i);
END LOOP;
END report_credit;
/
EXECUTE report_credit('Walken', 1200)
For credit amount of: 1200
Customer Walken: Emmet.Walken@LIMPKIN.COM has credit limit of: 3600
Customer Walken: Prem.Walken@BRANT.COM has credit limit of: 3700
17. 20
Using Nested Tables
Nested table characteristics:
– A table within a table
– Unbounded
– Available in both SQL and
PL/SQL as well as the
database
– Array-like access to
individual rows
Nested table:
19. 22
Creating Nested Tables
To create a nested table in the database:
To create a nested table in PL/SQL:
CREATE [OR REPLACE] TYPE type_name AS TABLE OF
element_datatype [NOT NULL];
TYPE type_name IS TABLE OF element_datatype [NOT NULL];
20. 23
Declaring Collections: Nested Table
– First, define an object type:
– Second, declare a column of that collection type:
CREATE TYPE typ_item AS OBJECT --create object
(prodid NUMBER(5),
price NUMBER(7,2) )
/
CREATE TYPE typ_item_nst -- define nested table type
AS TABLE OF typ_item
/
CREATE TABLE pOrder ( -- create database table
ordid NUMBER(5),
supplier NUMBER(5),
requester NUMBER(4),
ordered DATE,
items typ_item_nst)
NESTED TABLE items STORE AS item_stor_tab
/
23. 26
Referencing Collection Elements
Use the collection name and a subscript to reference a
collection element:
– Syntax:
– Example:
– To reference a field in a collection:
collection_name(subscript)
v_with_discount(i)
p_new_items(i).prodid
24. 27
Using Nested Tables in PL/SQL
CREATE OR REPLACE PROCEDURE add_order_items
(p_ordid NUMBER, p_new_items typ_item_nst)
IS
v_num_items NUMBER;
v_with_discount typ_item_nst;
BEGIN
v_num_items := p_new_items.COUNT;
v_with_discount := p_new_items;
IF v_num_items > 2 THEN
--ordering more than 2 items gives a 5% discount
FOR i IN 1..v_num_items LOOP
v_with_discount(i) :=
typ_item(p_new_items(i).prodid,
p_new_items(i).price*.95);
END LOOP;
END IF;
UPDATE pOrder
SET items = v_with_discount
WHERE ordid = p_ordid;
END;
25. 28
Using Nested Tables in PL/SQL
-- caller pgm:
DECLARE
v_form_items typ_item_nst:= typ_item_nst();
BEGIN
-- let's say the form holds 4 items
v_form_items.EXTEND(4);
v_form_items(1) := typ_item(1804, 65);
v_form_items(2) := typ_item(3172, 42);
v_form_items(3) := typ_item(3337, 800);
v_form_items(4) := typ_item(2144, 14);
add_order_items(800, v_form_items);
END;
v_form_items variable:
PRODID  PRICE
1804    65
3172    42
3337    800
2144    14

Resulting data in the pOrder nested table (prices stored after the 5% discount):
ORDID  SUPPLIER  REQUESTER  ORDERED    ITEMS (PRODID, PRICE)
500    50        5000       30-OCT-07
800    80        8000       31-OCT-07  (1804, 61.75) (3172, 39.9) (3337, 760) (2144, 13.3)
26. 29
Understanding Varrays
• To create a varray in the database:
• To create a varray in PL/SQL:
Varray:
CREATE [OR REPLACE] TYPE type_name AS VARRAY
(max_elements) OF element_datatype [NOT NULL];
TYPE type_name IS VARRAY (max_elements) OF
element_datatype [NOT NULL];
27. 30
CREATE TABLE department ( -- create database table
dept_id NUMBER(2),
name VARCHAR2(25),
budget NUMBER(12,2),
projects typ_ProjectList) -- declare varray as column
/
Declaring Collections: Varray
• First, define a collection type:
• Second, declare a collection of that type:
CREATE TYPE typ_Project AS OBJECT( --create object
project_no NUMBER(4),
title VARCHAR2(35),
cost NUMBER(12,2))
/
CREATE TYPE typ_ProjectList AS VARRAY (50) OF typ_Project
-- define VARRAY type
/
28. 31
Using Varrays
Add data to the table containing a varray column:
INSERT INTO department
VALUES (10, 'Exec Admn', 30000000,
typ_ProjectList(
typ_Project(1001, 'Travel Monitor', 400000),
typ_Project(1002, 'Open World', 10000000)));
INSERT INTO department
VALUES (20, 'IT', 5000000,
typ_ProjectList(
typ_Project(2001, 'DB11gR2', 900000)));
DEPARTMENT table:
DEPT_ID  NAME       BUDGET    PROJECTS (PROJECT_NO, TITLE, COST)
10       Exec Admn  30000000  (1001, Travel Monitor, 400000) (1002, Open World, 10000000)
20       IT         5000000   (2001, DB11gR2, 900000)
29. 32
– Querying the results:
– Querying the results with the TABLE function:
Using Varrays
SELECT * FROM department;
DEPT_ID NAME BUDGET
---------- ------------------------- ----------
PROJECTS(PROJECT_NO, TITLE, COST)
-----------------------------------------------------------------
10 Executive Administration 30000000
TYP_PROJECTLIST(TYP_PROJECT(1001, 'Travel Monitor', 400000),
TYP_PROJECT(1002, 'Open World', 10000000))
20 Information Technology 5000000
TYP_PROJECTLIST(TYP_PROJECT(2001, 'DB11gR2', 900000))
SELECT d2.dept_id, d2.name, d1.*
FROM department d2, TABLE(d2.projects) d1;
DEPT_ID NAME PROJECT_NO TITLE COST
------- ------------------------ ---------- -------------- --------
10 Executive Administration 1001 Travel Monitor 400000
10 Executive Administration 1002 Open World 10000000
20 Information Technology 2001 DB11gR2 900000
30. 33
Working with Collections in PL/SQL
• You can declare collections as the formal parameters of procedures
and functions.
• You can specify a collection type in the RETURN clause of a
function specification.
• Collections follow the usual scoping and instantiation rules.
CREATE OR REPLACE PACKAGE manage_dept_proj
AS
PROCEDURE allocate_new_proj_list
(p_dept_id NUMBER, p_name VARCHAR2, p_budget NUMBER);
FUNCTION get_dept_project (p_dept_id NUMBER)
RETURN typ_projectlist;
PROCEDURE update_a_project
(p_deptno NUMBER, p_new_project typ_Project,
p_position NUMBER);
FUNCTION manipulate_project (p_dept_id NUMBER)
RETURN typ_projectlist;
FUNCTION check_costs (p_project_list typ_projectlist)
RETURN boolean;
END manage_dept_proj;
31. 36
Initializing Collections
Three ways to initialize:
– Use a constructor.
– Fetch from the database.
– Assign another collection variable directly.
PROCEDURE allocate_new_proj_list
(p_dept_id NUMBER, p_name VARCHAR2, p_budget NUMBER)
IS
v_accounting_project typ_projectlist;
BEGIN
-- this example uses a constructor
v_accounting_project :=
typ_ProjectList
(typ_Project (1, 'Dsgn New Expense Rpt', 3250),
typ_Project (2, 'Outsource Payroll', 12350),
typ_Project (3, 'Audit Accounts Payable',1425));
INSERT INTO department
VALUES(p_dept_id, p_name, p_budget, v_accounting_project);
END allocate_new_proj_list;
32. 37
FUNCTION get_dept_project (p_dept_id NUMBER)
RETURN typ_projectlist
IS
v_accounting_project typ_projectlist;
BEGIN -- this example uses a fetch from the database
SELECT projects INTO v_accounting_project
FROM department WHERE dept_id = p_dept_id;
RETURN v_accounting_project;
END get_dept_project;
Initializing Collections
FUNCTION manipulate_project (p_dept_id NUMBER)
RETURN typ_projectlist
IS
v_accounting_project typ_projectlist;
v_changed_list typ_projectlist;
BEGIN
SELECT projects INTO v_accounting_project
FROM department WHERE dept_id = p_dept_id;
-- this example assigns one collection to another
v_changed_list := v_accounting_project;
RETURN v_changed_list;
END manipulate_project;
33. 38
Referencing Collection Elements
-- sample caller program to the manipulate_project function
DECLARE
v_result_list typ_projectlist;
BEGIN
v_result_list := manage_dept_proj.manipulate_project(10);
FOR i IN 1..v_result_list.COUNT LOOP
dbms_output.put_line('Project #: '
||v_result_list(i).project_no);
dbms_output.put_line('Title: '||v_result_list(i).title);
dbms_output.put_line('Cost: ' ||v_result_list(i).cost);
END LOOP;
END;
Project #: 1001
Title: Travel Monitor
Cost: 400000
Project #: 1002
Title: Open World
Cost: 10000000
34. 39
Using Collection Methods
• EXISTS
• COUNT
• LIMIT
• FIRST and LAST
• PRIOR and NEXT
• EXTEND
• TRIM
• DELETE
collection_name.method_name [(parameters)]
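Most of the methods listed above can be seen in a single anonymous block (a minimal sketch with hypothetical names; the deck's own check_costs example follows on the next slides):

```sql
DECLARE
  TYPE t_list IS TABLE OF VARCHAR2(20);
  v_list t_list := t_list('a', 'b', 'c', 'd');
BEGIN
  DBMS_OUTPUT.PUT_LINE(v_list.COUNT);            -- number of elements: 4
  DBMS_OUTPUT.PUT_LINE(v_list.FIRST || '..'
                       || v_list.LAST);          -- subscript range: 1..4
  v_list.DELETE(2);                              -- collection becomes sparse
  IF NOT v_list.EXISTS(2) THEN
    DBMS_OUTPUT.PUT_LINE('element 2 is gone');
  END IF;
  DBMS_OUTPUT.PUT_LINE(v_list.NEXT(1));          -- NEXT skips the deleted subscript
  v_list.EXTEND;                                 -- append one NULL element
  v_list.TRIM;                                   -- remove it again from the end
END;
/
```

Because DELETE can leave gaps, FIRST/NEXT traversal (as in the report_credit example earlier) is safer than a 1..COUNT loop on a possibly sparse collection.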
35. 41
FUNCTION check_costs (p_project_list typ_projectlist)
RETURN boolean
IS
c_max_allowed NUMBER := 10000000;
i INTEGER;
v_flag BOOLEAN := FALSE;
BEGIN
i := p_project_list.FIRST ;
WHILE i IS NOT NULL LOOP
IF p_project_list(i).cost > c_max_allowed then
v_flag := TRUE;
dbms_output.put_line (p_project_list(i).title || '
exceeded allowable budget.');
RETURN TRUE;
END IF;
i := p_project_list.NEXT(i);
END LOOP;
RETURN FALSE;
END check_costs;
Using Collection Methods
Traverse collections with the following methods:
36. 42
Using Collection Methods
-- sample caller program to check_costs
set serverout on
DECLARE
v_project_list typ_projectlist;
BEGIN
v_project_list := typ_ProjectList(
typ_Project (1,'Dsgn New Expense Rpt', 3250),
typ_Project (2, 'Outsource Payroll', 120000),
typ_Project (3, 'Audit Accounts Payable',14250000));
IF manage_dept_proj.check_costs(v_project_list) THEN
dbms_output.put_line('Project rejected: overbudget');
ELSE
dbms_output.put_line('Project accepted, fill out forms.');
END IF;
END;
Audit Accounts Payable exceeded allowable budget.
Project rejected: overbudget
V_PROJECT_LIST variable:
PROJECT_NO  TITLE                   COST
1           Dsgn New Expense Rpt    3250
2           Outsource Payroll       120000
3           Audit Accounts Payable  14250000
37. 43
PROCEDURE update_a_project
(p_deptno NUMBER, p_new_project typ_Project, p_position NUMBER)
IS
v_my_projects typ_ProjectList;
BEGIN
v_my_projects := get_dept_project (p_deptno);
v_my_projects.EXTEND; --make room for new project
/* Move varray elements forward */
FOR i IN REVERSE p_position..v_my_projects.LAST - 1 LOOP
v_my_projects(i + 1) := v_my_projects(i);
END LOOP;
v_my_projects(p_position) := p_new_project; -- insert new one
UPDATE department SET projects = v_my_projects
WHERE dept_id = p_deptno;
END update_a_project;
Manipulating Individual Elements
38. 44
Manipulating Individual Elements
-- check the table prior to the update:
SELECT d2.dept_id, d2.name, d1.*
FROM department d2, TABLE(d2.projects) d1;
DEPT_ID NAME PROJECT_NO TITLE COST
------- ------------------------- ---------- ----------------------------- --
10 Executive Administration 1001 Travel Monitor 400000
10 Executive Administration 1002 Open World 10000000
20 Information Technology 2001 DB11gR2 900000
-- caller program to update_a_project
BEGIN
manage_dept_proj.update_a_project(20,
typ_Project(2002, 'AQM', 80000), 2);
END;
DEPT_ID NAME PROJECT_NO TITLE COST
------- ------------------------- ---------- ----------------------------- --
10 Executive Administration 1001 Travel Monitor 400000
10 Executive Administration 1002 Open World 10000000
20 Information Technology 2001 DB11gR2 900000
20 Information Technology 2002 AQM 80000
-- check the table after the update:
SELECT d2.dept_id, d2.name, d1.*
FROM department d2, TABLE(d2.projects) d1;
39. 45
Listing Characteristics for Collections
Collection type       Maximum size  Sparsity  Storage                           Ordering
PL/SQL nested tables  No            Can be    N/A                               Does not retain ordering and subscripts
DB nested tables      No            No        Stored out-of-line                Does not retain ordering and subscripts
PL/SQL varrays        Yes           Dense     N/A                               Retains ordering and subscripts
DB varrays            Yes           Dense     Stored inline (if < 4,000 bytes)  Retains ordering and subscripts
Associative arrays    Dynamic       Yes       N/A                               Retains ordering and subscripts
40. 46
Guidelines for Using Collections Effectively
• Varrays involve fewer disk accesses and are more efficient.
• Use nested tables for storing large amounts of data.
• Use varrays to preserve the order of elements in the
collection column.
• If you do not have a requirement to delete elements in the
middle of a collection, favor varrays.
• Varrays do not allow piecewise updates.
• After deleting the elements, release the unused memory
with DBMS_SESSION.FREE_UNUSED_USER_MEMORY
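The last guideline can be sketched as follows (a minimal illustration; the type and variable names are hypothetical). DELETE removes the elements but the session keeps the memory allocated; the DBMS_SESSION call hands it back:

```sql
DECLARE
  TYPE t_big IS TABLE OF VARCHAR2(4000) INDEX BY PLS_INTEGER;
  v_big t_big;
BEGIN
  FOR i IN 1 .. 100000 LOOP
    v_big(i) := RPAD('x', 4000, 'x');   -- grow the session's PGA
  END LOOP;

  v_big.DELETE;                          -- drop all elements

  -- The freed elements are still held by the session's memory allocator;
  -- release the unused memory explicitly:
  DBMS_SESSION.FREE_UNUSED_USER_MEMORY;
END;
/
```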
42. 48
Cursor Design: Use Records
• Fetch into a record when fetching from a cursor.
• Benefit
– No individual variables declaration is needed.
– You can automatically use the structure of the SELECT column
list.
DECLARE
CURSOR cur_cust IS
SELECT customer_id, cust_last_name, cust_email
FROM customers WHERE credit_limit = 1200;
v_cust_record cur_cust%ROWTYPE;
BEGIN
OPEN cur_cust;
LOOP
FETCH cur_cust INTO v_cust_record;
...
43. 49
Guidelines for Good Cursor Design
• Reference implicit cursor attributes immediately after the SQL
statement executes.
• Benefit
– Doing so ensures that you are dealing with the result of the
correct SQL statement.
BEGIN
UPDATE customers
SET credit_limit = p_credit_limit
WHERE customer_id = p_cust_id;
get_avg_order(p_cust_id); -- procedure call
IF SQL%NOTFOUND THEN
...
44. 50
Use Cursor Parameters
• Create cursors with parameters.
• Benefit
– Parameters increase the cursor’s flexibility and reusability.
– Parameters help avoid scoping problems.
CURSOR cur_cust
(p_crd_limit NUMBER, p_acct_mgr NUMBER)
IS
SELECT customer_id, cust_last_name, cust_email
FROM customers
WHERE credit_limit = p_crd_limit
AND account_mgr_id = p_acct_mgr;
BEGIN
OPEN cur_cust(p_crd_limit_in, p_acct_mgr_in);
...
CLOSE cur_cust;
...
OPEN cur_cust(v_credit_limit, 145);
...
END;
45. 51
Using the Cursor For Loop
• Simplify coding with cursor FOR loops.
• Benefit
– Reduces the volume of code
– Automatically handles the open, fetch, and close operations, and defines a record type
that matches the cursor definition
CREATE OR REPLACE PROCEDURE cust_pack
(p_crd_limit_in NUMBER, p_acct_mgr_in NUMBER)
IS
v_credit_limit NUMBER := 1500;
CURSOR cur_cust
(p_crd_limit NUMBER, p_acct_mgr NUMBER)
IS
SELECT customer_id, cust_last_name, cust_email
FROM customers WHERE credit_limit = p_crd_limit
AND account_mgr_id = p_acct_mgr;
BEGIN
FOR cur_rec IN cur_cust (p_crd_limit_in, p_acct_mgr_in)
LOOP -- implicit open and fetch
...
END LOOP; -- implicit close
...
END;
46. 52
CREATE OR REPLACE PROCEDURE cust_list
IS
CURSOR cur_cust IS
SELECT customer_id, cust_last_name, credit_limit*1.1 new_credit
FROM customers;
cust_record cur_cust%ROWTYPE;
BEGIN
OPEN cur_cust;
LOOP
FETCH cur_cust INTO cust_record;
EXIT WHEN cur_cust%NOTFOUND;
DBMS_OUTPUT.PUT_LINE('Customer ' ||
cust_record.cust_last_name || ' wants credit '
|| cust_record.new_credit);
END LOOP;
...
More Guidelines for Good Cursor Design
• Make a DBA happy: Close a cursor when it is no longer needed.
• Use column aliases in cursors for calculated columns fetched into
records declared with %ROWTYPE.
Use col. alias
47. 53
Returning Result Sets From PL/SQL
• A Ref Cursor is a Cursor variable
• It holds a pointer to the result set of a previously
opened cursor
• The actual SQL statement of the cursor is dynamic
and determined at execution time
• A single Ref Cursor can point to different result
sets at different times
48. 54
Cursor Variables: Overview
A REF CURSOR holds a memory locator that points to a result set in memory:
1 Southlake, Texas 1400
2 San Francisco 1500
3 New Jersey 1600
4 Seattle, Washington 1700
5 Toronto 1800
49. 55
Working with Cursor Variables
1. Define and declare the cursor variable.
2. Open the cursor variable.
3. Fetch rows from the result set.
4. Close the cursor variable.
50. 56
Strong Versus Weak REF CURSOR Variables
• Strong REF CURSOR:
– Is restrictive
– Specifies a RETURN type
– Associates only with type-compatible queries
– Is less error prone
• Weak REF CURSOR:
– Is nonrestrictive
– Associates with any query
– Is very flexible
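As a minimal sketch (type names are illustrative), the two declaration styles look like this:

```sql
DECLARE
  -- strong: the RETURN type is fixed at compile time
  TYPE rt_emp IS REF CURSOR RETURN employees%ROWTYPE;
  cv_strong rt_emp;

  -- weak: no RETURN type, so any query can be associated with it
  TYPE rt_any IS REF CURSOR;
  cv_weak rt_any;   -- SYS_REFCURSOR is a predefined weak type
BEGIN
  OPEN cv_strong FOR SELECT * FROM employees;          -- must match ROWTYPE
  OPEN cv_weak   FOR SELECT table_name FROM user_tables; -- any query
  CLOSE cv_strong;
  CLOSE cv_weak;
END;
/
```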
51. 57
DECLARE
TYPE rt_cust IS REF CURSOR
RETURN customers%ROWTYPE;
...
Step 1: Defining a REF CURSOR Type
Define a REF CURSOR type:
– ref_type_name is a type specified in subsequent
declarations.
– return_type represents a record type.
– RETURN keyword indicates a strong cursor.
TYPE ref_type_name IS REF CURSOR
[RETURN return_type];
52. 58
Step 1: Declaring a Cursor Variable
Declare a cursor variable of a cursor type:
– cursor_variable_name is the name of the cursor variable.
– ref_type_name is the name of a REF CURSOR type.
DECLARE
TYPE rt_cust IS REF CURSOR
RETURN customers%ROWTYPE;
cv_cust rt_cust;
cursor_variable_name ref_type_name;
53. 59
Step 1: Declaring a REF CURSOR Return Type
Options:
– Use %TYPE and %ROWTYPE.
– Specify a user-defined record in the RETURN clause.
– Declare the cursor variable as the formal parameter of
a stored procedure or function.
54. 60
Step 2: Opening a Cursor Variable
• Associate a cursor variable with a multiple-row
SELECT statement.
• Execute the query.
• Identify the result set:
– cursor_variable_name is the name of the
cursor variable.
– select_statement is the SQL SELECT statement.
OPEN cursor_variable_name
FOR select_statement;
55. 62
Step 3: Fetching from a Cursor Variable
• Retrieve rows from the result set one at a time.
• The return type of the cursor variable must be
compatible with the variables named in the INTO
clause of the FETCH statement.
FETCH cursor_variable_name
INTO variable_name1
[,variable_name2,. . .]
| record_name;
56. 63
Step 4: Closing a Cursor Variable
• Disable a cursor variable.
• The result set is undefined.
• Accessing the cursor variable after it is closed
raises the INVALID_CURSOR predefined
exception.
CLOSE cursor_variable_name;
57. 64
Passing Cursor Variables as Arguments
You can pass query result sets among PL/SQL-stored
subprograms and various clients.
The pointer to the result set can be passed between subprograms, or
accessed by a host variable on the client side.
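A sketch of the pattern (procedure and column names are illustrative): a procedure opens the cursor variable and hands the pointer back through an OUT parameter, and the client fetches from it.

```sql
-- return a pointer to a result set via an OUT parameter
CREATE OR REPLACE PROCEDURE get_customers
  (p_mgr IN NUMBER, p_rc OUT SYS_REFCURSOR)
IS
BEGIN
  OPEN p_rc FOR
    SELECT customer_id, cust_last_name
    FROM customers
    WHERE account_mgr_id = p_mgr;
END;
/
```

A SQL*Plus client could then consume it with a host variable:
`VARIABLE rc REFCURSOR`, `EXECUTE get_customers(145, :rc)`, `PRINT rc`.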
59. 67
Using the SYS_REFCURSOR Predefined Type
CREATE OR REPLACE PROCEDURE REFCUR
(p_num IN NUMBER)
IS
refcur sys_refcursor;
empno emp.empno%TYPE;
ename emp.ename%TYPE;
BEGIN
IF p_num = 1 THEN
OPEN refcur FOR SELECT empno, ename FROM emp;
DBMS_OUTPUT.PUT_LINE('Employee# Name');
DBMS_OUTPUT.PUT_LINE('----- -------');
LOOP
FETCH refcur INTO empno, ename;
EXIT WHEN refcur%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(empno || ' ' || ename);
END LOOP;
ELSE
....
SYS_REFCURSOR is a built-in
REF CURSOR type that allows
any result set to be associated
with it.
60. 69
Rules for Cursor Variables
• You cannot use cursor variables with remote
subprograms on another server.
• You cannot use comparison operators to test
cursor variables.
• You cannot assign a null value to cursor variables.
• You cannot use REF CURSOR types in CREATE
TABLE or VIEW statements.
• Cursors and cursor variables are not interoperable.
61. 70
Comparing Cursor Variables with Static Cursors
Cursor variables have the following benefits:
– Are dynamic and ensure more flexibility
– Are not tied to a single SELECT statement
– Hold the value of a pointer
– Can reduce network traffic
– Give access to query work areas after a
block completes
63. 72
Using the For Update Clause
• Use explicit locking to deny access to other
sessions for the duration of a transaction
• Lock the rows before the update or delete
• Syntax:
SELECT ...
FROM ...
FOR UPDATE [OF column_reference][NOWAIT | WAIT n];
64. 73
Handling Locked Rows
• Select for Update default is to wait for locked rows
• We can set the wait time to be limited with a time
frame or fail if rows are already locked
• We can also ask the query to skip the locked rows
and return the unlocked rows:
SELECT ...
FROM ...
FOR UPDATE SKIP LOCKED;
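The wait behavior mentioned above can be sketched like this (table and predicate are illustrative):

```sql
-- wait at most 5 seconds for the row locks, then raise an error
SELECT employee_id, salary
FROM employees
WHERE department_id = 60
FOR UPDATE WAIT 5;

-- fail immediately (ORA-00054) if any selected row is already locked
SELECT employee_id, salary
FROM employees
WHERE department_id = 60
FOR UPDATE NOWAIT;
```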
65. 74
Using WHERE CURRENT OF Clause
• When using cursor with FOR UPDATE clause, we
might want to change that same record
• We can use the WHERE CURRENT OF cursor to get a
fast ROWID access to that row
UPDATE employees
SET salary = 12000
WHERE CURRENT OF emp_cursor;
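A fuller sketch showing the cursor declaration that the UPDATE above relies on (the 10% raise is illustrative):

```sql
DECLARE
  CURSOR emp_cursor IS
    SELECT employee_id, salary
    FROM employees
    WHERE department_id = 60
    FOR UPDATE OF salary NOWAIT;   -- rows are locked as they are fetched
BEGIN
  FOR emp_rec IN emp_cursor LOOP
    UPDATE employees
    SET salary = emp_rec.salary * 1.1
    WHERE CURRENT OF emp_cursor;   -- fast ROWID access to the fetched row
  END LOOP;
  COMMIT;                          -- releases the locks
END;
/
```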
66. 75
Using the RETURNING Clause
• Include the RETURNING clause with an INSERT,
UPDATE, and DELETE to return column values.
• Benefit:
– Eliminates the need to SELECT the row after DML
– Fewer network round trips
– Less server CPU time
– Fewer cursors
– Less server memory is required
67. 76
Using RETURNING Clause Example
DECLARE
v_emp_sal employees.salary%type;
BEGIN
UPDATE employees e
set salary = salary * 1.2
WHERE e.employee_id = 202
RETURNING e.salary into v_emp_sal;
dbms_output.put_line('The new salary is ' ||
v_emp_sal);
END;
/
The new salary is 7200
68. 77
Using Bulk Binding
Use bulk binds to reduce context switches between
the PL/SQL engine and the SQL engine.
The PL/SQL run-time engine (procedural statement executor) hands the
whole array to the SQL engine (SQL statement executor) in a single
context switch:
PL/SQL block
FORALL j IN 1..1000
INSERT …
(OrderId(j),
OrderDate(j), …);
69. 78
Using Bulk Binding
Bind whole arrays of values simultaneously, rather
than looping to perform fetch, insert, update, and
delete on multiple rows.
– Instead of:
– Use:
...
FOR i IN 1 .. 50000 LOOP
INSERT INTO bulk_bind_example_tbl
VALUES(...);
END LOOP; ...
...
FORALL i IN 1 .. 50000
INSERT INTO bulk_bind_example_tbl
VALUES(...);
...
70. 80
Using Bulk Binding
Use BULK COLLECT to improve performance:
CREATE OR REPLACE PROCEDURE process_customers
(p_account_mgr customers.account_mgr_id%TYPE)
IS
TYPE typ_numtab IS TABLE OF
customers.customer_id%TYPE;
TYPE typ_chartab IS TABLE OF
customers.cust_last_name%TYPE;
TYPE typ_emailtab IS TABLE OF
customers.cust_email%TYPE;
v_custnos typ_numtab;
v_last_names typ_chartab;
v_emails typ_emailtab;
BEGIN
SELECT customer_id, cust_last_name, cust_email
BULK COLLECT INTO v_custnos, v_last_names, v_emails
FROM customers
WHERE account_mgr_id = p_account_mgr;
...
END process_customers;
71. 81
Using Bulk Binding
Use the RETURNING clause to retrieve information
about the rows that are being modified:
DECLARE
TYPE typ_replist IS VARRAY(100) OF NUMBER;
TYPE typ_numlist IS TABLE OF
orders.order_total%TYPE;
repids typ_replist :=
typ_replist(153, 155, 156, 161);
totlist typ_numlist;
c_big_total CONSTANT NUMBER := 60000;
BEGIN
FORALL i IN repids.FIRST..repids.LAST
UPDATE orders
SET order_total = .95 * order_total
WHERE sales_rep_id = repids(i)
AND order_total > c_big_total
RETURNING order_total BULK COLLECT INTO totlist;
END;
72. 83
Using SAVE EXCEPTIONS
• You can use the SAVE EXCEPTIONS keyword in your FORALL
statements:
• Exceptions raised during execution are saved in the
%BULK_EXCEPTIONS cursor attribute.
• The attribute is a collection of records with two fields:
– Note that the values always refer to the most recently executed FORALL statement.
FORALL index IN lower_bound..upper_bound
SAVE EXCEPTIONS
{insert_stmt | update_stmt | delete_stmt}
Field Definition
ERROR_INDEX Holds the iteration of the FORALL statement where the
exception was raised
ERROR_CODE Holds the corresponding Oracle error code
73. 84
Handling FORALL Exceptions
DECLARE
TYPE NumList IS TABLE OF NUMBER;
num_tab NumList :=
NumList(100,0,110,300,0,199,200,0,400);
bulk_errors EXCEPTION;
PRAGMA EXCEPTION_INIT (bulk_errors, -24381 );
BEGIN
FORALL i IN num_tab.FIRST..num_tab.LAST
SAVE EXCEPTIONS
DELETE FROM orders WHERE order_total < 500000/num_tab(i);
EXCEPTION WHEN bulk_errors THEN
DBMS_OUTPUT.PUT_LINE('Number of errors is: '
|| SQL%BULK_EXCEPTIONS.COUNT);
FOR j in 1..SQL%BULK_EXCEPTIONS.COUNT
LOOP
DBMS_OUTPUT.PUT_LINE (
TO_CHAR(SQL%BULK_EXCEPTIONS(j).error_index) ||
' / ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(j).error_code) );
END LOOP;
END;
/
75. 86
Dynamic SQL
• Used for two main reasons:
– Modify commands dynamically to retrieve or filter
columns based on input
– Run DDL and DCL commands from PL/SQL
– PLEASE don’t use it for running DMLs!
• Two main ways to run dynamic commands:
– EXECUTE IMMEDIATE
– DBMS_SQL
76. 87
Advantages of Native Dynamic SQL
• Easy syntax:
• Accepts all command types
• Accept bind variables with the USING clause
• Allows dynamic PL/SQL creation
EXECUTE IMMEDIATE dynamic_string
{
INTO { define_variable [, define_variable ...] | record_name }
| BULK COLLECT INTO { collection_name [, collection_name ...] |
:host_array_name }
}
[ USING [ IN | OUT | IN OUT ] bind_argument
[, [ IN | OUT | IN OUT ] bind_argument] ... ] [ returning_clause ] ;
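A minimal sketch of the USING clause with a bind argument (the query is illustrative):

```sql
DECLARE
  v_email customers.cust_email%TYPE;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT cust_email FROM customers WHERE customer_id = :id'
    INTO v_email
    USING 101;   -- bind argument instead of string concatenation
  DBMS_OUTPUT.PUT_LINE(v_email);
END;
/
```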
77. 88
Advantages of DBMS_SQL
• DBMS_SQL supports statements with unknown
number of inputs or outputs
• We can use DESCRIBE_COLUMNS procedure in the
DBMS_SQL package to describe columns for a
cursor opened/parsed through DBMS_SQL
• DBMS_SQL supports SQL statements larger than 32 KB
• DBMS_SQL lets you reuse SQL statements
78. 89
Using Execute Immediate
CREATE OR REPLACE PROCEDURE del_rows
(p_condition varchar2, p_rows_deld out number)
IS
BEGIN
EXECUTE IMMEDIATE 'DELETE FROM employees ' || p_condition;
p_rows_deld:=sql%ROWCOUNT;
END;
/
DECLARE
cnt number;
BEGIN
del_rows('where employee_id = 201', cnt);
END;
/
79. 90
Using DBMS_SQL
CREATE OR REPLACE
PROCEDURE del_rows
(p_condition in varchar2, p_rows_del out number)
is
cursor_id integer;
BEGIN
cursor_id := dbms_sql.open_cursor;
dbms_sql.parse (cursor_id, 'DELETE FROM employees ' || p_condition,
dbms_sql.native);
p_rows_del := dbms_sql.execute (cursor_id);
dbms_sql.close_cursor (cursor_id);
END;
/
80. 91
Transforming DBMS_SQL
Cursor into a REF CURSOR
• Starting Oracle 11g, we can transform a
DBMS_SQL cursor into a PL/SQL REF CURSOR and
vice versa
DBMS_SQL.TO_REFCURSOR (cursor_number IN INTEGER) RETURN SYS_REFCURSOR;
DBMS_SQL.TO_CURSOR_NUMBER (rc IN OUT SYS_REFCURSOR) RETURN INTEGER;
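A sketch of the conversion (the query is illustrative): parse and execute through DBMS_SQL, then hand the open cursor to clients as a REF CURSOR. Note the cursor must already be parsed and executed, and the DBMS_SQL cursor number is no longer usable afterwards.

```sql
DECLARE
  c      INTEGER := DBMS_SQL.OPEN_CURSOR;
  ignore INTEGER;
  rc     SYS_REFCURSOR;
  v_name employees.last_name%TYPE;
BEGIN
  DBMS_SQL.PARSE(c, 'SELECT last_name FROM employees', DBMS_SQL.NATIVE);
  ignore := DBMS_SQL.EXECUTE(c);
  rc := DBMS_SQL.TO_REFCURSOR(c);   -- c is now invalid
  LOOP
    FETCH rc INTO v_name;
    EXIT WHEN rc%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_name);
  END LOOP;
  CLOSE rc;
END;
/
```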
81. 92
Disadvantages of Dynamic SQL
• Dynamic SQL is not checked until runtime
– Syntax
– Structure validity
– Permissions
• Dynamic SQL can be used as SQL injection entry
point
82. 93
Understanding SQL Injection
SQL injection is a technique for maliciously
exploiting applications that use client-supplied data
in SQL statements.
– Attackers trick the SQL engine into executing
unintended commands.
– SQL injection techniques may differ, but they all exploit
a single vulnerability in the application.
– To immunize your code against SQL injection attacks,
use bind arguments or validate and sanitize all input
concatenated to dynamic SQL.
83. 94
SQL Injection: Example
-- First order attack
CREATE OR REPLACE PROCEDURE GET_EMAIL
(p_last_name VARCHAR2 DEFAULT NULL)
AS
TYPE cv_custtyp IS REF CURSOR;
cv cv_custtyp;
v_email customers.cust_email%TYPE;
v_stmt VARCHAR2(400);
BEGIN
v_stmt := 'SELECT cust_email FROM customers
WHERE cust_last_name = '''|| p_last_name || '''';
DBMS_OUTPUT.PUT_LINE('SQL statement: ' || v_stmt);
OPEN cv FOR v_stmt;
LOOP
FETCH cv INTO v_email;
EXIT WHEN cv%NOTFOUND;
DBMS_OUTPUT.PUT_LINE('Email: '||v_email);
END LOOP;
CLOSE cv;
EXCEPTION WHEN OTHERS THEN
dbms_output.PUT_LINE(sqlerrm);
dbms_output.PUT_LINE('SQL statement: ' || v_stmt);
END;
String literals that are incorrectly
validated or not validated are
concatenated into a dynamic SQL
statement, and interpreted as code by the
SQL engine.
85. 96
Reducing the Attack Surface
Use the following strategies to reduce attack surface:
– Use invoker’s rights.
– Reduce arbitrary inputs.
– Use Bind arguments.
– Beware of the filter pitfall.
86. 97
Avoiding Privilege Escalation
• Give out privileges appropriately.
• Run code with invoker’s rights when possible.
• Ensure that the database privilege model is upheld
when using definer’s rights.
Invoker’s rights
Definer’s rights
87. 98
Using Invoker’s Rights
• Using invoker’s rights:
–Helps to limit the privileges
–Helps to minimize the security exposure.
• The following example does not use invoker's rights:
CREATE OR REPLACE
PROCEDURE change_password(p_username VARCHAR2 DEFAULT NULL,
p_new_password VARCHAR2 DEFAULT NULL)
IS
v_sql_stmt VARCHAR2(500);
BEGIN
v_sql_stmt := 'ALTER USER '||p_username ||' IDENTIFIED BY '
|| p_new_password;
EXECUTE IMMEDIATE v_sql_stmt;
END change_password;
GRANT EXECUTE ON change_password to OE, HR, SH;
Note the use of dynamic SQL with
concatenated input values.
88. 99
Using Invoker’s Rights
• OE is successful at changing the SYS password, because,
by default, CHANGE_PASSWORD executes with SYS
privileges:
• Add the AUTHID clause so that the procedure runs with the invoker's privileges:
EXECUTE sys.change_password ('SYS', 'mine')
CREATE OR REPLACE
PROCEDURE change_password(p_username VARCHAR2 DEFAULT NULL,
p_new_password VARCHAR2 DEFAULT NULL)
AUTHID CURRENT_USER
IS
v_sql_stmt VARCHAR2(500);
BEGIN
v_sql_stmt := 'ALTER USER '||p_username ||' IDENTIFIED BY '
|| p_new_password;
EXECUTE IMMEDIATE v_sql_stmt;
END change_password;
89. 100
Reducing Arbitrary Inputs
• Reduce the end-user interfaces to only those that are
actually needed.
– In a Web application, restrict users to accessing specified Web
pages.
– In a PL/SQL API, expose only those routines that are intended for
customer use.
• Where user input is required, make use of language features
to ensure that only data of the intended type can be
specified.
– Do not specify a VARCHAR2 parameter when it will be used as a
number.
– Do not use NUMBER if you need only positive integers; use
NATURAL instead.
91. 102
Formatting Oracle Identifiers
– Example 1: The object name used as an identifier:
SELECT count(*) records FROM orders;
– Example 2: The object name used as a literal:
SELECT num_rows FROM user_tables
WHERE table_name = 'ORDERS';
– Example 3: The object name used as a quoted (normal
format) identifier:
• The "orders" table referenced in example 3 is a different table
compared to the orders table in examples 1 and 2.
• It is vulnerable to SQL injection.
SELECT count(*) records FROM "orders";
92. 103
Working with Identifiers in Dynamic SQL
• For your identifiers, determine:
1. Where will the input come from: user or data dictionary?
2. What verification is required?
3. How will the result be used: as an identifier or a literal value?
• These three factors affect:
– What preprocessing is required (if any) prior to calling the
verification functions
– Which DBMS_ASSERT verification function is required
– What post-processing is required before the identifier can
actually be used
93. 104
Avoiding Injection by Using
DBMS_ASSERT.ENQUOTE_LITERAL
CREATE OR REPLACE PROCEDURE Count_Rows(w in varchar2)
authid definer as
Quote constant varchar2(1) := '''';
Quote_Quote constant varchar2(2) := Quote||Quote;
Safe_Literal varchar2(32767) :=
Quote||replace(w,Quote,Quote_Quote)||Quote;
Stmt constant varchar2(32767) :=
'SELECT count(*) FROM t WHERE a='||
DBMS_ASSERT.ENQUOTE_LITERAL(Safe_Literal);
Row_Count number;
BEGIN
EXECUTE IMMEDIATE Stmt INTO Row_Count;
DBMS_OUTPUT.PUT_LINE(Row_Count||' rows');
END;
/
Verify whether the literal
is well-formed.
94. 107
Avoiding Injection by Using
DBMS_ASSERT.SIMPLE_SQL_NAME
CREATE OR REPLACE PROCEDURE show_col2 (p_colname varchar2,
p_tablename varchar2)
AS
type t is varray(200) of varchar2(25);
Results t;
Stmt CONSTANT VARCHAR2(4000) :=
'SELECT '||dbms_assert.simple_sql_name( p_colname ) || ' FROM
'|| dbms_assert.simple_sql_name( p_tablename ) ;
BEGIN
DBMS_Output.Put_Line ('SQL Stmt: ' || Stmt);
EXECUTE IMMEDIATE Stmt bulk collect into Results;
for j in 1..Results.Count() loop
DBMS_Output.Put_Line(Results(j));
end loop;
END show_col2;
Verify that the input string conforms to
the basic characteristics of a simple
SQL name.
95. 109
Using Bind Arguments
• Most common vulnerability:
– Dynamic SQL with string concatenation
• Your code design must:
– Avoid input string concatenation in dynamic SQL
– Use bind arguments, whether automatically via static
SQL or explicitly via dynamic SQL statements
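As a sketch, the injection-prone GET_EMAIL procedure from the earlier example can be made safe with a bind argument in an OPEN ... FOR statement (the procedure name here is hypothetical):

```sql
CREATE OR REPLACE PROCEDURE get_email_safe
  (p_last_name VARCHAR2 DEFAULT NULL)
AS
  cv      SYS_REFCURSOR;
  v_email customers.cust_email%TYPE;
BEGIN
  -- the input is bound, never concatenated, so the SQL engine
  -- can never interpret it as code
  OPEN cv FOR
    'SELECT cust_email FROM customers WHERE cust_last_name = :name'
    USING p_last_name;
  LOOP
    FETCH cv INTO v_email;
    EXIT WHEN cv%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE('Email: ' || v_email);
  END LOOP;
  CLOSE cv;
END;
/
```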
96. 110
Beware of Filter Parameters
• Filter parameter:
– P_WHERE_CLAUSE is a filter.
– It is difficult to protect against SQL injection.
• Prevention methods:
– Do not specify APIs that allow arbitrary query parameters
to be exposed.
– Any existing APIs with this type of functionality must be
deprecated and replaced with safe alternatives.
stmt := 'SELECT session_id FROM sessions
WHERE' || p_where_clause;
97. 111
Autonomous Transactions
• An autonomous transaction (AT) is an independent
transaction started by another (main) transaction
• The main transaction is suspended until the AT
completes (commits or rolls back)
• Uses PRAGMA compiler directive
• Allowed in individual routines
• Commonly used for loggers, progress bars, and
concurrent operations
98. 112
AT Example: Main Transaction
DECLARE
tmp NUMBER;
BEGIN
FOR i IN 1..10
LOOP
tmp := i;
INSERT INTO t
VALUES (TRUNC(i/2));
END LOOP;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
logger('Failed on ' || TO_CHAR(tmp));
ROLLBACK;
RAISE;
END;
/
99. 113
AT Example: Logger (AT)
PROCEDURE logger (message IN VARCHAR2) IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO logger VALUES (sysdate, message);
COMMIT;
END;
/
100. New and Useful Features
for Developers
11gR2, 12cR1 and 12cR2
102. 116
Using the LISTAGG Function
• For a specified measure, LISTAGG orders data
within each group specified in the ORDER BY
clause and then concatenates the values of the
measure column
• Limited to 4000 chars (in 11g, see 12cR2
enhancement!)
LISTAGG(measure_expr [, 'delimiter'])
WITHIN GROUP (order_by_clause) [OVER
query_partition_clause]
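The 12cR2 enhancement referenced above lets you control the overflow behavior instead of hitting ORA-01489; a sketch:

```sql
-- 12cR2: truncate gracefully when the result exceeds the limit,
-- appending an indicator and the count of omitted values
SELECT LISTAGG(last_name, ', ' ON OVERFLOW TRUNCATE '...' WITH COUNT)
         WITHIN GROUP (ORDER BY last_name) AS emp_list
FROM hr.employees;
```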
103. 117
Using LISTAGG: Example
SELECT department_id "Dept", hire_date
"Date",
last_name "Name",
LISTAGG(last_name, ', ') WITHIN GROUP
(ORDER BY hire_date, last_name)
OVER (PARTITION BY department_id) as
"Emp_list"
FROM hr.employees
WHERE hire_date < DATE '2003-09-01'
ORDER BY "Dept", "Date", "Name";
104. 118
The NTH_VALUE Analytic Function
• Returns the N-th values in an ordered set of values
• Different default window: RANGE BETWEEN
UNBOUNDED PRECEDING AND CURRENT
ROW
NTH_VALUE (measure_expr, n)
[ FROM { FIRST | LAST } ][ { RESPECT | IGNORE } NULLS ]
OVER (analytic_clause)
105. 119
Using NTH_VALUE: Example
SELECT prod_id, channel_id, MIN(amount_sold),
NTH_VALUE ( MIN(amount_sold), 2) OVER (PARTITION BY
prod_id ORDER BY channel_id
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED
FOLLOWING) nv
FROM sh.sales
WHERE prod_id BETWEEN 13 and 16
GROUP BY prod_id, channel_id;
106. 120
Using NTH_VALUE: Example
SELECT prod_id, channel_id, MIN(amount_sold),
NTH_VALUE ( MIN(amount_sold), 2) OVER (PARTITION BY
prod_id ORDER BY channel_id
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED
FOLLOWING) nv
FROM sh.sales
WHERE prod_id BETWEEN 13 and 16
GROUP BY prod_id, channel_id;
107. 121
Virtual Columns
• Virtual columns are dynamic (not stored) columns of a table
(no views required)
• Virtual columns obtain their value by evaluating an
expression that can reference:
– Columns from the same table
– Constants
– Function calls (user-defined)
• Might be used for
– Eliminate views
– Control table partitioning
– Manage “Binary” XMLType data
– Index values (function based index)
108. 122
Virtual Columns Example
column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL]
CREATE TABLE employees (
id NUMBER,
first_name VARCHAR2(10),
last_name VARCHAR2(10),
salary NUMBER(9,2),
comm1 NUMBER(3),
comm2 NUMBER(3),
salary1 AS (ROUND(salary*(1+comm1/100),2)),
salary2 NUMBER GENERATED ALWAYS AS
(ROUND(salary*(1+comm2/100),2)) VIRTUAL,
CONSTRAINT employees_pk PRIMARY KEY (id)
);
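Following the table above, a virtual column can be indexed and queried like a regular column (this sketch reuses the salary1 column defined above; the index name is hypothetical):

```sql
-- indexing a virtual column gives you a function-based index
CREATE INDEX employees_sal1_idx ON employees (salary1);

-- the column can then be used in predicates like any other column
SELECT id, salary1
FROM employees
WHERE salary1 > 10000;
```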
110. 124
What Is a Compound Trigger?
• A single trigger on a table that allows you to
specify actions for each of the following four
timing points:
– Before the firing statement
– Before each row that the firing statement affects
– After each row that the firing statement affects
– After the firing statement
111. 125
Working with Compound Triggers
• The compound trigger body supports a common
PL/SQL state that the code for each timing point
can access.
• The compound trigger common state is:
– Established when the triggering statement starts
– Destroyed when the triggering statement completes
• A compound trigger has a declaration section and
a section for each of its timing points.
112. 126
The Benefits of Using a Compound Trigger
• You can use compound triggers to:
– Program an approach where you want the actions you
implement for the various timing points to share
common data.
– Accumulate rows destined for a second table so that
you can periodically bulk-insert them
– Avoid the mutating-table error (ORA-04091) by
allowing rows destined for a second table to
accumulate and then bulk-inserting them
113. 127
Timing-Point Sections of a
Table Compound Trigger
• A compound trigger defined on a table has one or
more of the following timing-point sections.
Timing-point sections must appear in the order
shown in the table.
Timing Point Compound Trigger Section
Before the triggering statement executes BEFORE statement
After the triggering statement executes AFTER statement
Before each row that the triggering statement affects BEFORE EACH ROW
After each row that the triggering statement affects AFTER EACH ROW
114. 128
Compound Trigger Structure for Tables
CREATE OR REPLACE TRIGGER schema.trigger
FOR dml_event_clause ON schema.table
COMPOUND TRIGGER
-- Initial section
-- Declarations
-- Subprograms
-- Optional section
BEFORE STATEMENT IS ...;
-- Optional section
BEFORE EACH ROW IS ...;
-- Optional section
AFTER EACH ROW IS ...;
-- Optional section
AFTER STATEMENT IS ...;
115. 129
Trigger Restrictions on Mutating Tables
• A mutating table is:
– A table that is being modified by an UPDATE, DELETE, or INSERT
statement, or
– A table that might be updated by the effects of a DELETE CASCADE
constraint
• The session that issued the triggering statement cannot query or
modify a mutating table.
• This restriction prevents a trigger from seeing an inconsistent set
of data.
• This restriction applies to all triggers that use the FOR EACH
ROW clause.
• Views being modified in the INSTEAD OF triggers are not
considered mutating.
116. 130
Using a Compound Trigger to
Resolve the Mutating Table Error
CREATE OR REPLACE TRIGGER check_salary
FOR INSERT OR UPDATE OF salary, job_id
ON employees
WHEN (NEW.job_id <> 'AD_PRES')
COMPOUND TRIGGER
TYPE salaries_t IS TABLE OF employees.salary%TYPE;
min_salaries salaries_t;
max_salaries salaries_t;
TYPE department_ids_t IS TABLE OF employees.department_id%TYPE;
department_ids department_ids_t;
TYPE department_salaries_t IS TABLE OF employees.salary%TYPE
INDEX BY VARCHAR2(80);
department_min_salaries department_salaries_t;
department_max_salaries department_salaries_t;
-- example continues on next slide
117. 131
Using a Compound Trigger to
Resolve the Mutating Table Error
. . .
BEFORE STATEMENT IS
BEGIN
SELECT MIN(salary), MAX(salary), NVL(department_id, -1)
BULK COLLECT INTO min_Salaries, max_salaries, department_ids
FROM employees
GROUP BY department_id;
FOR j IN 1..department_ids.COUNT() LOOP
department_min_salaries(department_ids(j)) := min_salaries(j);
department_max_salaries(department_ids(j)) := max_salaries(j);
END LOOP;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
IF :NEW.salary < department_min_salaries(:NEW.department_id)
OR :NEW.salary > department_max_salaries(:NEW.department_id) THEN
RAISE_APPLICATION_ERROR(-20505,'New Salary is out of acceptable
range');
END IF;
END AFTER EACH ROW;
END check_salary;
118. 132
Compound Trigger Restrictions
• A compound trigger must be a DML trigger and defined on
either a table or a view
• An exception that occurs in one section must be handled in
that section. It cannot transfer control to another section
• :OLD and :NEW cannot appear in the declaration, BEFORE
STATEMENT, or the AFTER STATEMENT sections
• Only the BEFORE EACH ROW section can change the
value of :NEW
• The firing order of compound triggers is not guaranteed
unless you use the FOLLOWS clause
119. 133
FOLLOWS Clause
• To ensure that a trigger fires after certain other
triggers on the same object, use the FOLLOWS
clause
• Lets you order the executions of multiple triggers
relative to each other
• Applies to both compound and simple triggers
• Applies only to the section of the compound
trigger with the same timing point as the simple
trigger
120. 134
FOLLOWS Clause Example
• Consider two AFTER ROW ... FOR UPDATE triggers
defined on the same table. One trigger needs to
reference the :OLD value and the other trigger
needs to change the :OLD value.
• In this case, you can use the FOLLOWS clause to
order the firing sequence
CREATE OR REPLACE TRIGGER change_product
AFTER UPDATE of product_id ON order_items
FOR EACH ROW
FOLLOWS oe1.compute_total
BEGIN
dbms_output.put_line ('Do processing here…');
END;
122. 136
PIVOT and UNPIVOT
• You can use the PIVOT operator of the SELECT
statement to write cross-tabulation queries that
rotate the column values into new columns,
aggregating data in the process.
• You can use the UNPIVOT operator of the
SELECT statement to rotate columns into values
of a column.
PIVOT UNPIVOT
123. 137
Pivoting on the QUARTER
Column: Conceptual Example
[Slide figure: a sample sales table with the columns PRODUCT, CHANNEL,
QUARTER, COUNTRY, QUANTITY_SOLD, and AMOUNT_SOLD (e.g. Kids Jeans | I |
Q1 | USA | 2,500 | 30,000) is pivoted on the QUARTER column, producing
one row per PRODUCT with a QUANTITY_SOLD column for each of Q1–Q4.]
124. 138
Pivoting Before Oracle 11g
• Pivoting data before 11g required a complex query
using the CASE or DECODE functions
select product,
sum(case when quarter = 'Q1' then amount_sold else null end) Q1,
sum(case when quarter = 'Q2' then amount_sold else null end) Q2,
sum(case when quarter = 'Q3' then amount_sold else null end) Q3,
sum(case when quarter = 'Q4' then amount_sold else null end) Q4
from sales
group by product;
125. 139
PIVOT Clause Syntax
table_reference PIVOT [ XML ]
( aggregate_function ( expr ) [[AS] alias ]
[, aggregate_function ( expr ) [[AS] alias ] ]...
pivot_for_clause
pivot_in_clause )
-- Specify the column(s) to pivot whose values are to
-- be pivoted into columns.
pivot_for_clause =
FOR { column |( column [, column]... ) }
-- Specify the pivot column values from the columns you
-- specified in the pivot_for_clause.
pivot_in_clause =
IN ( { { { expr | ( expr [, expr]... ) } [ [ AS] alias] }...
| subquery | { ANY | ANY [, ANY]...} } )
126. 141
Creating a New View: Example
CREATE OR REPLACE VIEW sales_view AS
SELECT
prod_name AS product,
country_name AS country,
channel_id AS channel,
SUBSTR(calendar_quarter_desc, 6,2) AS quarter,
SUM(amount_sold) AS amount_sold,
SUM(quantity_sold) AS quantity_sold
FROM sales, times, customers, countries, products
WHERE sales.time_id = times.time_id AND
sales.prod_id = products.prod_id AND
sales.cust_id = customers.cust_id AND
customers.country_id = countries.country_id
GROUP BY prod_name, country_name, channel_id,
SUBSTR(calendar_quarter_desc, 6, 2);
127. 143
Selecting the SALES VIEW Data
SELECT product, country, channel, quarter, quantity_sold
FROM sales_view;
PRODUCT COUNTRY CHANNEL QUARTER QUANTITY_SOLD
------------ ------------ ---------- -------- -------------
Y Box Italy 4 01 21
Y Box Italy 4 02 17
Y Box Italy 4 03 20
. . .
Y Box Japan 2 01 35
Y Box Japan 2 02 39
Y Box Japan 2 03 36
Y Box Japan 2 04 46
Y Box Japan 3 01 65
. . .
Bounce Italy 2 01 34
Bounce Italy 2 02 43
. . .
9502 rows selected.
128. 144
Pivoting the QUARTER Column
in the SH Schema: Example
SELECT *
FROM
(SELECT product, quarter, quantity_sold
FROM sales_view) PIVOT (sum(quantity_sold)
FOR quarter IN ('01', '02', '03', '04'))
ORDER BY product DESC;
. . .
130. 147
Unpivoting Before Oracle 11g
• Unpivoting data before 11g required multiple queries
on the table combined with the
UNION ALL operator
SELECT *
FROM (
SELECT product, '01' AS quarter, Q1_value FROM sales
UNION ALL
SELECT product, '02' AS quarter, Q2_value FROM sales
UNION ALL
SELECT product, '03' AS quarter, Q3_value FROM sales
UNION ALL
SELECT product, '04' AS quarter, Q4_value FROM sales
);
131. 148
Using the UNPIVOT Operator
• An UNPIVOT operation does not reverse a PIVOT
operation; instead, it rotates data found in
multiple columns of a single row into multiple
rows of a single column.
• If you are working with pivoted data, UNPIVOT
cannot reverse any aggregations that have been
made by PIVOT or any other means.
UNPIVOT
132. 149
Using the UNPIVOT Clause
• The UNPIVOT clause rotates columns from a
previously pivoted table or a regular table into rows.
You specify:
– The measure column or columns to be unpivoted
– The name or names for the columns that result from the
UNPIVOT operation
– The columns that are unpivoted back into values of the
column specified in pivot_for_clause
• You can use an alias to map the column name to
another value.
133. 150
UNPIVOT Clause Syntax
table_reference UNPIVOT [{INCLUDE|EXCLUDE} NULLS]
-- specify the measure column(s) to be unpivoted.
( { column | ( column [, column]... ) }
unpivot_for_clause
unpivot_in_clause )
-- Specify one or more names for the columns that will
-- result from the unpivot operation.
unpivot_for_clause =
FOR { column | ( column [, column]... ) }
-- Specify the columns that will be unpivoted into values of
-- the column specified in the unpivot_for_clause.
unpivot_in_clause =
( { column | ( column [, column]... ) }
[ AS { constant | ( constant [, constant]... ) } ]
[, { column | ( column [, column]... ) }
[ AS { constant | ( constant [, constant]...) } ] ]...)
134. 151
Creating a New Pivot Table: Example
. . .
CREATE TABLE pivotedtable AS
SELECT *
FROM
(SELECT product, quarter, quantity_sold
FROM sales_view) PIVOT (sum(quantity_sold)
FOR quarter IN ('01' AS Q1, '02' AS Q2,
'03' AS Q3, '04' AS Q4));
SELECT * FROM pivotedtable
ORDER BY product DESC;
135. 152
Unpivoting the QUARTER Column : Example
• Unpivoting the QUARTER Column in the SH Schema:
SELECT *
FROM pivotedtable
UNPIVOT (quantity_sold FOR quarter IN (Q1, Q2, Q3, Q4))
ORDER BY product DESC, quarter;
. . .
136. 153
More Examples…
• More information and examples can be found on
my blog:
http://www.realdbamagic.com/he/pivot-a-table/
138. 155
32K VARCHAR2/NVARCHAR2
• Oracle 12c allows SQL VARCHAR2 to be the same size as
PL/SQL VARCHAR2
• This is not the default behavior; it must be turned on
(changing MAX_STRING_SIZE requires a restart in UPGRADE
mode and running utl32k.sql, and cannot be reverted):
• Create table with 32k varchar2:
155
ALTER SYSTEM set MAX_STRING_SIZE = EXTENDED scope = SPFILE;
CREATE TABLE Applicants
(id NUMBER GENERATED AS IDENTITY,
first_name varchar2(30),
last_name varchar2(30),
application date,
CV varchar2(32767)
);
139. 156
Invisible Columns
• Columns might be marked “invisible”
• The invisible column will not be available unless
explicitly named
• This is very useful when doing application
migration
• The *_TAB_COLS views see all of the columns (even
invisible and unused); *_TAB_COLUMNS hides invisible ones
156
140. 157
Invisible Column Example
157
CREATE TABLE tab1 (
id NUMBER,
description VARCHAR2(50) INVISIBLE
);
INSERT INTO tab1 VALUES (1);
SELECT * FROM tab1;
ID
----------
1
INSERT INTO tab1 (id, description) VALUES (2, 'TWO');
COMMIT;
SELECT id, description
FROM tab1;
ID DESCRIPTION
---------- ----------------------------------------------
1
2 TWO
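Column visibility can also be toggled after the fact with ALTER TABLE. A short sketch against the tab1 table above (note that a column made visible again is placed at the end of the column order):

```sql
-- Make the invisible column visible again; it now shows up in SELECT *
ALTER TABLE tab1 MODIFY (description VISIBLE);

-- Hide an existing column (e.g. during an application migration)
ALTER TABLE tab1 MODIFY (description INVISIBLE);
```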
141. 158
Identity Column Type
• In previous releases, there was no direct equivalent of
the AutoNumber or Identity functionality of other
database engines
• This behavior had to be implemented using a
combination of a sequence and a trigger
• Oracle 12c introduces the ability to define an identity
clause against a table column defined using a numeric
type
• User should have the create sequence privilege
• Identity columns are always not null
142. 159
Identity Column Type – Options
• ALWAYS
– Forces the use of the identity. If an insert statement
references the identity column, an error is produced
• BY DEFAULT
– Allows using the identity if the column isn't referenced in
the insert statement. If the column is referenced, the
specified value will be used in place of the identity
• BY DEFAULT ON NULL
– Allows the identity to be used if the identity column is
referenced, but a value of NULL is specified
Identity.sql
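The Identity.sql demo is not reproduced here; a minimal sketch of the three options (table and column names are made up) might look like this:

```sql
-- One table per identity option
CREATE TABLE t_always  (id NUMBER GENERATED ALWAYS AS IDENTITY,             val VARCHAR2(10));
CREATE TABLE t_default (id NUMBER GENERATED BY DEFAULT AS IDENTITY,         val VARCHAR2(10));
CREATE TABLE t_on_null (id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY, val VARCHAR2(10));

INSERT INTO t_always  (val) VALUES ('a');           -- id is generated
INSERT INTO t_always  (id, val) VALUES (99, 'b');   -- fails: ORA-32795
INSERT INTO t_default (id, val) VALUES (99, 'b');   -- 99 is used instead of the identity
INSERT INTO t_on_null (id, val) VALUES (NULL, 'c'); -- NULL triggers generation
```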
143. 160
Identity Column Type – Restrictions
• You can specify only one identity column per table
• When specifying identity clause, you must specify a
numeric data type for datatype in the column
definition clause
• When specifying identity clause, you cannot specify
the DEFAULT clause in the column definition clause
• When specifying identity clause,
the NOT NULL constraint is implicitly specified
• CREATE TABLE AS SELECT will not inherit the identity
property on a column
144. 161
Default Value Using a Sequence
• You can specify CURRVAL and NEXTVAL as default
values for a column
• Default value is used when the column is not
referenced by the insert or when the DEFAULT
keyword is used
• Gives you the ability to auto-populate master-detail
relationships
– Only makes sense if you can guarantee the inserts into the
detail table would always immediately follow the insert into
the master table
Default_with_Sequence.sql
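The Default_with_Sequence.sql demo is not reproduced here; a hypothetical master-detail sketch of the idea could be:

```sql
CREATE SEQUENCE master_seq;

CREATE TABLE master (
  id   NUMBER DEFAULT master_seq.NEXTVAL PRIMARY KEY,
  name VARCHAR2(30)
);

CREATE TABLE detail (
  id        NUMBER GENERATED AS IDENTITY,
  master_id NUMBER DEFAULT master_seq.CURRVAL, -- last value inserted into master
  item      VARCHAR2(30)
);

INSERT INTO master (name) VALUES ('Order 1'); -- uses master_seq.NEXTVAL
INSERT INTO detail (item) VALUES ('Line 1');  -- master_id defaults to the same value
```

This only works reliably when each insert into the detail table immediately follows the insert into the master table in the same session, as the slide warns.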
145. 162
Default Value on Explicit Nulls
• In an insert statement, when the column is
explicitly referenced, even when using the value
NULL, the default value is not used
• Oracle database 12c allows you to modify this
behavior using the ON NULL clause in the default
definition
Default_with_Null.sql
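The Default_with_Null.sql demo is not reproduced here; a minimal sketch of DEFAULT ON NULL (hypothetical table) could be:

```sql
CREATE TABLE t (
  id     NUMBER,
  status VARCHAR2(10) DEFAULT ON NULL 'NEW'
);

INSERT INTO t (id, status) VALUES (1, NULL); -- status becomes 'NEW', not NULL
INSERT INTO t (id) VALUES (2);               -- status defaults to 'NEW' as well
```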
146. 163
Calling PL/SQL from SQL
• You can define PL/SQL functions and procedures in
the WITH clause of a subquery and then use them as
you would any other built-in or user-defined function
• The “;” does not work as a terminator to the SQL
statement when the PL/SQL declaration is included in
the WITH clause
• Functions defined in the PL/SQL declaration section of
the WITH clause take precedence over objects with
the same name defined at the schema level
• Provides better performance as compared with
schema level functions
PLSQL_from_SQL.sql
147. 164
Functions in the WITH Clause (12.1)
with
function sumascii (str in varchar2) return number is
x number := 0;
begin
for i in 1..length (str)
loop
x := x + ascii (substr (str, i, 1)) ;
end loop;
return x;
end;
select /*+ WITH_PLSQL */ h.EMPLOYEE_ID, h.last_name,
sumascii (h.last_name)
from hr.employees h
149. 166
Top-N Queries
• A Top-N query is used to retrieve the top or
bottom N rows from an ordered set
• Combining two Top-N queries gives you the ability
to page through an ordered set
• Oracle 12c has introduced the row limiting clause
to simplify Top-N queries
150. 167
Top-N in 12cR1
• This is ANSI syntax
• The default offset is 0
• Null values in offset, rowcount or percent will
return no rows
[ OFFSET offset { ROW | ROWS } ]
[ FETCH { FIRST | NEXT } [ { rowcount | percent PERCENT } ]
{ ROW | ROWS } { ONLY | WITH TIES } ]
151. 168
Top-N Examples
SELECT last_name, salary
FROM hr.employees
ORDER BY salary
FETCH FIRST 4 ROWS ONLY;
SELECT last_name, salary
FROM hr.employees
ORDER BY salary
FETCH FIRST 4 ROWS WITH TIES;
SELECT last_name, salary
FROM hr.employees
ORDER BY salary DESC
FETCH FIRST 10 PERCENT ROWS ONLY;
152. 169
Paging Before 12c
• Before 12c we had to use the rownum pseudo
column to filter out rows
• That will require sorting the entire rowset
SELECT val
FROM (SELECT val, rownum AS rnum
FROM (SELECT val
FROM rownum_order_test
ORDER BY val)
WHERE rownum <= 10)
WHERE rnum >= 5;
153. 170
Paging in Oracle 12c
• After 12c we have a syntax improvement for
paging using the Top-N queries
• This will use ROW_NUMBER and RANK in the
background; there are no real optimization
improvements
SELECT val
FROM rownum_order_test
ORDER BY val
OFFSET 4 ROWS FETCH NEXT 5 ROWS ONLY;
154. 171
More Examples
• More information and examples can be found on
my blog:
http://www.realdbamagic.com/he/12c-top-n-query/
156. 173
What is Pattern Matching
• Identify and group rows with consecutive values
• Consecutive in this regard means row after row
• Uses regular expression like syntax to find patterns
157. 174
Common Business Challenges
• Finding sequences of events in security
applications
• Locating dropped calls in a CDR listing
• Financial price behaviors (V-shape, W-shape, U-shape, etc.)
• Fraud detection and sensor data analysis
158. 175
MATCH_RECOGNIZE Syntax
SELECT
FROM [row pattern input table]
MATCH_RECOGNIZE
( [ PARTITION BY <cols> ]
[ ORDER BY <cols> ]
[ MEASURES <cols> ]
[ ONE ROW PER MATCH | ALL ROWS PER MATCH ]
[ AFTER MATCH <skip to option> ]
PATTERN ( <row pattern> )
DEFINE <definition list>
)
159. 176
Example: Sequential Employee IDs
• Our goal: find groups of rows with sequential employee IDs
• This can be useful for detecting missing employees
in a table, or to locate “gaps” in a group
FIRSTEMP LASTEMP
---------- ----------
7371 7498
7500 7520
7522 7565
7567 7653
7655 7697
7699 7781
7783 7787
7789 7838
160. 177
Pattern Matching Example
SELECT *
FROM Emps
MATCH_RECOGNIZE (
ORDER BY emp_id
PATTERN (STRT B*)
DEFINE B AS emp_id = PREV(emp_id)+1
ONE ROW PER MATCH
MEASURES
STRT.emp_id firstemp,
LAST(emp_id) lastemp
AFTER MATCH SKIP PAST LAST ROW
);
1. Define input
2. Pattern Matching
3. Order input
4. Process pattern
5. Using defined conditions
6. Output: rows per match
7. Output: columns per row
8. Where to go after match?
161. 178
Pattern Matching Example (Actual Syntax)
SELECT *
FROM Emps
MATCH_RECOGNIZE (
ORDER BY emp_id
MEASURES
STRT.emp_id firstemp,
LAST(emp_id) lastemp
ONE ROW PER MATCH
AFTER MATCH SKIP PAST LAST ROW
PATTERN (STRT B*)
DEFINE B AS emp_id = PREV(emp_id)+1
);
1. Define input
2. Pattern Matching
3. Order input
4. Process pattern
5. Using defined conditions
6. Output: rows per match
7. Output: columns per row
8. Where to go after match?
162. 179
Oracle 11g Analytic Function Solution
select firstemp, lastemp
from (select nvl (lag (r) over (order by r), minr) firstemp,
             q lastemp
      from (select emp_id r,
                   lag (emp_id) over (order by emp_id) q,
                   min (emp_id) over () minr,
                   max (emp_id) over () maxr
            from emps e1)
      where r != q + 1 -- groups including lower end
      union
      select q,
             nvl (lead (r) over (order by r), maxr)
      from (select emp_id r,
                   lead (emp_id) over (order by emp_id) q,
                   min (emp_id) over () minr,
                   max (emp_id) over () maxr
            from emps e1)
      where r + 1 != q -- groups including higher end
     );
163. 180
Supported Regular Expression Patterns
• Concatenation: No operator between elements.
• Quantifiers:
– * 0 or more matches.
– + 1 or more matches
– ? 0 or 1 match.
– {n} Exactly n matches.
– {n,} n or more matches.
– {n, m} Between n and m (inclusive) matches.
– {, m} Between 0 and m (inclusive) matches.
• Alternation: |
• Grouping: ()
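To see these quantifiers in action, here is a sketch of the classic V-shape stock example (the ticker table with symbol, tstamp, and price columns is assumed):

```sql
SELECT *
FROM ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY tstamp
  MEASURES STRT.tstamp      AS start_tstamp,
           LAST(DOWN.tstamp) AS bottom_tstamp,
           LAST(UP.tstamp)   AS end_tstamp
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP
  PATTERN (STRT DOWN+ UP+)   -- "+": one or more falling rows, then one or more rising rows
  DEFINE
    DOWN AS price < PREV(price),
    UP   AS price > PREV(price)
);
```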
164. 181
Few Last Tips
• Test all cases: pattern matching can be very tricky
• Don’t forget to test your data with no matches
• There is no LISTAGG and no DISTINCT when
using MATCH_RECOGNIZE
• Pattern variables cannot be used as bind variables
165. 182
More 12c Developers’ Features…
• Session-specific sequence
• Truncate CASCADE command
• Temporal Validity
• Temporary Undo
• Online DML Operations
• And tons of new features for DBAs too!
182
166. 183
More Examples…
• More information and examples can be found on
my blog:
http://www.realdbamagic.com/he/pivot-a-table/
http://www.realdbamagic.com/he/12c-top-n-query/
http://www.realdbamagic.com/he/with-pl-sql-oracle-12c/
http://www.realdbamagic.com/he/session-level-sequence-12c/
183
168. 185
Object Names Length
• Up to Oracle 12cR2, object names (tables,
columns, indexes, constraints, etc.) were limited to
30 bytes
• Starting with Oracle 12cR2, the limit is 128
bytes
create table with_a_really_really_really_really_really_long_name (
and_lots_and_lots_and_lots_and_lots_and_lots_of int,
really_really_really_really_really_long_columns int
);
169. 186
LISTAGG in Oracle 12c
• Limited to an output of 4000 bytes, or 32767 with
extended column sizes
• Oracle 12cR2 provides overflow handling:
• Example:
listagg (
measure_expr, ','
[ on overflow (truncate|error) ]
[ text ] [ (with|without) count ]
) within group (order by cols)
select listagg(table_name, ',' on overflow truncate)
within group (order by table_name) table_names
from dba_tables
170. 187
Verify Data Type Conversions (12.2)
• If we try to validate using regular conversion we
might hit an error:
ORA-01858: a non-numeric character was found where a numeric
was expected
• Use validate_conversion to validate the data
without an error
select t.*
from dodgy_dates t
where validate_conversion(is_this_a_date as date) = 1;
select t.*
from dodgy_dates t
where validate_conversion(is_this_a_date as date, 'yyyymmdd') = 1;
171. 188
Handle Casting Conversion Errors (12.2)
• Let’s say we convert the value of a column using
CAST. What happens if some of the values don’t
fit?
• The cast function can now handle conversion
errors:
select cast (
'not a date' as date
default date'0001-01-01' on conversion error
) dt
from dual;
172. 189
JSON in 12.2.0.1
• JSON in 12cR1 used to work with JSON documents
stored in the database
• 12cR2 brought the ability to create and modify
JSON:
– JSON_object
– JSON_objectagg
– JSON_array
– JSON_arrayagg
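As a sketch of the new generation functions, the following builds one JSON document per department, with each employee as an object inside an aggregated array (column names are taken from the HR sample schema):

```sql
SELECT json_object(
         'department' VALUE d.department_name,
         'staff'      VALUE json_arrayagg(
             json_object('name'   VALUE e.last_name,
                         'salary' VALUE e.salary))) AS dept_doc
FROM hr.departments d
JOIN hr.employees e ON e.department_id = d.department_id
GROUP BY d.department_name;
```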
173. 190
Handling JSON Documents from PL/SQL
• 12.2.0.1 also introduced PL/SQL object to handle JSON.
The key object types are:
– json_element_t – a supertype for JSON objects and arrays
– json_object_t – for working with JSON objects (documents)
– json_array_t – for working with JSON arrays
• The treat function casts elements to the right type
emps := treat(doc.get('employees') as json_array_t);
for i in 0 .. emps.get_size - 1 loop
emp := treat(emps.get(i) as json_object_t);
emp.put('title', '');
emp.put('name', upper(emp.get_String('name')));
end loop;
174. 191
More 12c Developers’ Features…
• Approximate Query Enhancements
• PL/SQL Code Coverage using new DBMS package:
dbms_plsql_code_coverage
• Partitions enhancements
– List partition major changes: Auto-list, multi-column
– Read only partitions
– More…
• Currently available only in the cloud environment
177. 194
Native and Interpreted Compilation
Two compilation methods:
• Interpreted compilation
– Default compilation method
– Interpreted at run time
• Native compilation
– Compiles into native code
– Stored in the SYSTEM tablespace
178. 195
Deciding on a Compilation Method
• Use the interpreted mode when (typically during
development):
– You are using a debugging tool, such as SQL Developer
– You need the code compiled quickly
• Use the native mode when (typically post
development):
– Your code is heavily PL/SQL based
– You are looking for increased performance in production
Native
Interpreted
179. 196
Setting the Compilation Method
• PLSQL_CODE_TYPE: Specifies the compilation mode for the
PL/SQL library units
• PLSQL_OPTIMIZE_LEVEL: Specifies the optimization level to be
used to compile the PL/SQL library units
• In general, for fastest performance, use the following setting:
PLSQL_CODE_TYPE = { INTERPRETED | NATIVE }
PLSQL_OPTIMIZE_LEVEL = { 0 | 1 | 2 | 3}
PLSQL_CODE_TYPE = NATIVE
PLSQL_OPTIMIZE_LEVEL = 2
180. 198
Viewing the Compilation Settings
• Use the USER|ALL|DBA_PLSQL_OBJECT_SETTINGS data
dictionary views to display the settings for a PL/SQL object:
DESCRIBE ALL_PLSQL_OBJECT_SETTINGS
Name Null? Type
------------------------- -------- --------------------
OWNER NOT NULL VARCHAR2(30)
NAME NOT NULL VARCHAR2(30)
TYPE VARCHAR2(12)
PLSQL_OPTIMIZE_LEVEL NUMBER
PLSQL_CODE_TYPE VARCHAR2(4000)
PLSQL_DEBUG VARCHAR2(4000)
PLSQL_WARNINGS VARCHAR2(4000)
NLS_LENGTH_SEMANTICS VARCHAR2(4000)
PLSQL_CCFLAGS VARCHAR2(4000)
PLSCOPE_SETTINGS VARCHAR2(4000)
182. 200
Setting Up a Database for Native Compilation
• This requires DBA privileges.
• The PLSQL_CODE_TYPE compilation parameter
must be set to NATIVE.
• The benefits apply to all the built-in PL/SQL
packages that are used for many database
operations.
ALTER SYSTEM SET PLSQL_CODE_TYPE = NATIVE;
183. 201
Compiling a Program Unit for Native Compilation
SELECT name, plsql_code_type, plsql_optimize_level
FROM user_plsql_object_settings
WHERE name = 'ADD_ORDER_ITEMS';
NAME PLSQL_CODE_T PLSQL_OPTIMIZE_LEVEL
---------------------- ------------ --------------------
ADD_ORDER_ITEMS INTERPRETED 2
ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE';
ALTER PROCEDURE add_order_items COMPILE;
SELECT name, plsql_code_type, plsql_optimize_level
FROM user_plsql_object_settings
WHERE name = 'ADD_ORDER_ITEMS';
NAME PLSQL_CODE_T PLSQL_OPTIMIZE_LEVEL
---------------------- ------------ --------------------
ADD_ORDER_ITEMS NATIVE 2
184. 202
PL/SQL Compile-Time Warnings
• We can turn on checking for certain warning
conditions
• Warning messages can be issued during compilation
of PL/SQL subprograms (not for anonymous blocks)
• Use the SQL*Plus SHOW ERRORS command or query
the USER_ERRORS data dictionary view, to see any
warnings generated during compilation
• PL/SQL warning messages use the prefix PLW
• Use PLSQL_WARNINGS initialization parameter, or the
DBMS_WARNING package
202
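Instead of ALTER SESSION, the same settings can be managed programmatically with the DBMS_WARNING package. A short sketch:

```sql
-- Session-level equivalent of ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL'
BEGIN
  DBMS_WARNING.SET_WARNING_SETTING_STRING('ENABLE:ALL', 'SESSION');
END;
/

-- Check the current session setting
SELECT DBMS_WARNING.GET_WARNING_SETTING_STRING() FROM dual;
```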
185. 203
PL/SQL Warning Categories
• SEVERE: Messages for conditions that might cause
unexpected behavior or wrong results, such as aliasing
problems with parameters
• PERFORMANCE: Messages for conditions that might cause
performance problems, such as passing a VARCHAR2 value
to a NUMBER column in an INSERT statement.
• INFORMATIONAL: Messages for conditions that do not have
an effect on performance or correctness, but that you might
want to change to make the code more maintainable, such
as unreachable code that can never be executed.
• ALL: refers to all warning messages
203
186. 204
PLSQL_WARNINGS Parameter
• Can be set at
– System level
– Session level
– Single compilation level
204
ALTER SYSTEM SET PLSQL_WARNINGS='ENABLE:PERFORMANCE';
ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE',
'DISABLE:PERFORMANCE', 'ERROR:07204';
ALTER PROCEDURE query_emp COMPILE
PLSQL_WARNINGS='ENABLE:ALL';
187. 205
PLW-06009 Warning Message
• This warning means that the OTHERS handler of
PL/SQL subroutine can exit without executing
some form of RAISE or a call to the standard
RAISE_APPLICATION_ERROR procedure.
• Good programming practices suggest that the
OTHERS handler should pass an exception upward
to avoid the risk of having exceptions go unnoticed
205
188. 206
Pragma Deprecate (12.2)
• Mark a function as deprecated
alter session set plsql_warnings = 'enable:(6019,6020,6021,6022)';
create or replace procedure your_old_code is
pragma deprecate ( your_old_code,
'This is deprecated. Use new_code instead!' );
begin
-- old code here
null;
end your_old_code;
/
show error
Warning(2,3): PLW-06019: entity YOUR_OLD_CODE is deprecated
189. 207
Pragma Deprecate (cont.)
• Errors will show when compiling calling code:
alter session set plsql_warnings = 'error:6020';
create or replace procedure calling_old_code is
begin
your_old_code();
end calling_old_code;
/
SQL> show error
Errors for PROCEDURE CALLING_OLD_CODE:
LINE/COL ERROR
-------- ---------------------------------------------------------------
4/3 PLS-06020: reference to a deprecated entity: YOUR_OLD_CODE
declared in unit YOUR_OLD_CODE[1,11]. This is deprecated. Use
new_code instead!
191. 209
Intra Unit Inlining
• Definition:
– Inlining is defined as the replacement of a call to
subroutine with a copy of the body of the subroutine that is
called.
– The copied procedure generally runs faster than the
original.
– The PL/SQL compiler can automatically find the calls that
should be inlined.
• Benefits:
– Inlining can provide large performance gains (a factor
of 2–10 times) when applied judiciously.
192. 210
Use of Inlining
• Influence implementing inlining via two methods:
– Oracle parameter PLSQL_OPTIMIZE_LEVEL
– PRAGMA INLINE
• Recommend that you:
– Inline small programs
– Inline programs that are frequently executed
• Use performance tools to identify hot spots
suitable for inline applications:
– plstimer
193. 211
Inlining Concepts
• Noninlined program:
CREATE OR REPLACE PROCEDURE small_pgm
IS
a NUMBER;
b NUMBER;
PROCEDURE touch(x IN OUT NUMBER, y NUMBER)
IS
BEGIN
IF y > 0 THEN
x := x*x;
END IF;
END;
BEGIN
a := b;
FOR I IN 1..10 LOOP
touch(a, -17);
a := a*b;
END LOOP;
END small_pgm;
194. 212
Inlining Concepts
• Examine the loop after inlining:
...
BEGIN
a := b;
FOR i IN 1..10 LOOP
IF -17 > 0 THEN
a := a*a;
END IF;
a := a*b;
END LOOP;
END small_pgm;
...
195. 213
Inlining Concepts
• The loop is transformed in several steps:
a := b;
FOR i IN 1..10 LOOP ...
IF false THEN
a := a*a;
END IF;
a := a*b;
END LOOP;
a := b;
FOR i IN 1..10 LOOP ...
a := a*b;
END LOOP;
a := b;
a := a*b;
FOR i IN 1..10 LOOP ...
END LOOP;
a := b*b;
FOR i IN 1..10 LOOP ...
END LOOP;
196. 214
Inlining: Example
• Set the PLSQL_OPTIMIZE_LEVEL session-level
parameter to a value of 2 or 3:
– Setting it to 2 means no automatic inlining is attempted.
– Setting it to 3 means automatic inlining is attempted and no
pragmas are necessary.
• Within a PL/SQL subroutine, use PRAGMA INLINE
– NO means no inlining occurs regardless of the level and
regardless of the YES pragmas.
– YES means inline a particular call at level 2, and increase the
priority of inlining that call at level 3.
ALTER PROCEDURE small_pgm COMPILE
PLSQL_OPTIMIZE_LEVEL = 3 REUSE SETTINGS;
197. 215
Inlining: Example
• After setting the PLSQL_OPTIMIZE_LEVEL
parameter, use a pragma:
CREATE OR REPLACE PROCEDURE small_pgm
IS
a PLS_INTEGER;
FUNCTION add_it(a PLS_INTEGER, b PLS_INTEGER)
RETURN PLS_INTEGER
IS
BEGIN
RETURN a + b;
END;
BEGIN
PRAGMA INLINE (add_it, 'YES'); -- names the subprogram being inlined
a := add_it(3, 4) + 6;
END small_pgm;
198. 216
Inlining: Guidelines
• Pragmas apply only to calls in the next statement
following the pragma.
• Programs that make use of smaller helper subroutines
are good candidates for inlining.
• Only local subroutines can be inlined.
• You cannot inline an external subroutine.
• Cursor functions should not be inlined.
• Inlining can increase the size of a unit.
• Be careful about suggesting to inline functions that
are deterministic.
200. 218
Invalidation of Dependent Objects
• Procedure A is a direct dependent of View B. View B is a direct dependent
of Table C. Procedure A is an indirect dependent of Table C.
• Direct dependents are invalidated only by changes to the referenced
object that affect them.
• Indirect dependents can be invalidated by changes to the referenced object
that do not affect them.
Procedure A → View B → Table C
201. 219
More Precise Dependency Metadata
• Before 11g, adding column D to table T invalidated the
dependent objects.
• Oracle Database 11g records additional, finer-grained
dependency management:
– Adding column D to table T does not impact view V and does
not invalidate the dependent objects
[Diagram: Procedure P and Function F depend on View V
(columns A,B), which depends on Table T (columns A,B);
column D is added to Table T]
202. 220
Fine-Grained Dependency Management
• In Oracle Database 11g, dependencies are now
tracked at the level of element within unit.
• Element-based dependency tracking covers the
following:
– Dependency of a single-table view on its base table
– Dependency of a PL/SQL program unit (package
specification, package body, or subprogram) on the
following:
• Other PL/SQL program units
• Tables
• Views
203. 221
Fine-Grained Dependency Management:
Example 1
CREATE TABLE t2 (col_a NUMBER, col_b NUMBER, col_c NUMBER);
CREATE VIEW v AS SELECT col_a, col_b FROM t2;
ALTER TABLE t2 ADD (col_d VARCHAR2(20));
SELECT ud.name, ud.type, ud.referenced_name,
ud.referenced_type, uo.status
FROM user_dependencies ud, user_objects uo
WHERE ud.name = uo.object_name AND ud.name = 'V';
SELECT ud.name, ud.type, ud.referenced_name,
ud.referenced_type, uo.status
FROM user_dependencies ud, user_objects uo
WHERE ud.name = uo.object_name AND ud.name = 'V';
204. 222
Fine-Grained Dependency Management:
Example 1
ALTER TABLE t2 MODIFY (col_a VARCHAR2(20));
SELECT ud.name, ud.referenced_name, ud.referenced_type,
uo.status
FROM user_dependencies ud, user_objects uo
WHERE ud.name = uo.object_name AND ud.name = 'V';
205. 223
Fine-Grained Dependency Management:
Example 2
CREATE PACKAGE pkg IS
PROCEDURE proc_1;
END pkg;
/
CREATE OR REPLACE PROCEDURE p IS
BEGIN
pkg.proc_1();
END p;
/
CREATE OR REPLACE PACKAGE pkg
IS
PROCEDURE proc_1;
PROCEDURE unheard_of;
END pkg;
/
206. 224
Guidelines for Reducing Invalidation
• To reduce invalidation of dependent objects:
– Add new items to the end of the package
– Reference each table through a view
207. 225
Object Revalidation
• An object that is not valid when it is referenced
must be validated before it can be used.
• Validation occurs automatically when an object is
referenced; it does not require explicit user action.
• If an object is not valid, its status is either
COMPILED WITH ERRORS, UNAUTHORIZED,
or INVALID.
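Revalidation can also be triggered explicitly instead of waiting for the first reference. A sketch (the procedure name is hypothetical):

```sql
-- Find invalid objects in the current schema ...
SELECT object_name, object_type, status
FROM   user_objects
WHERE  status = 'INVALID';

-- ... and recompile one explicitly
ALTER PROCEDURE my_proc COMPILE;
```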
209. 227
Tuning PL/SQL
• Memory handing in PL/SQL
• Global Temporary Tables (GTT)
• PL/SQL result cache
• Tips and Tricks
227
210. 228
Packages: Memory Issues
• Create packages that contain logically related
program units
• Reserve space for large allocations:
– Set the SHARED_POOL_RESERVED_SIZE initialization
parameter
• Prevent large or frequently used objects from
being aged out:
– Use the DBMS_SHARED_POOL package
228
ORA-04031: unable to allocate 4160 bytes of shared memory..
211. 229
Pinning Objects
• Use dbms_shared_pool package:
• Flags:
– P – Package, Procedure or Function
– T – Type
– R – Trigger
– Q – Sequence
229
DBMS_SHARED_POOL.KEEP(object_name, flag)
DBMS_SHARED_POOL.UNKEEP(object_name, flag)
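A usage sketch of pinning a package and verifying that it is kept:

```sql
-- Pin a heavily used package so it is not aged out of the shared pool
BEGIN
  DBMS_SHARED_POOL.KEEP('SYS.STANDARD', 'P'); -- 'P' = package/procedure/function
END;
/

-- Check what is currently kept
SELECT owner, name, type, kept
FROM   v$db_object_cache
WHERE  kept = 'YES';
```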
212. 230
Reusing Package Memory
Pragma SERIALLY_REUSABLE
• Memory is used more efficiently for scalability
(more users consume more memory)
• Package global memory is kept in the SGA (instead
of the UGA) and is reused for different users
• Package global memory is only used within a unit
of work (a single call from the client to the server, or
from one server to another)
• Memory can be released and reused by another
user
230
213. 231
SERIALLY_REUSABLE - Example
231
CREATE OR REPLACE PACKAGE maintain_state
IS
pragma serially_reusable;
num1 number:= 0;
END maintain_state;
/
CREATE OR REPLACE PACKAGE regular_state
IS
num1 number:= 0;
END regular_state;
/
215. 233
SERIALLY_REUSABLE - Example
First Run Second Run
233
THE MAINTAIN PACKAGE
Original Value: 0
New Value: 10
THE REGULAR PACKAGE
Original Value: 0
New Value: 10
THE MAINTAIN PACKAGE
Original Value: 10
New Value: 20
THE REGULAR PACKAGE
Original Value: 10
New Value: 20
THE MAINTAIN PACKAGE
Original Value: 0
New Value: 10
THE REGULAR PACKAGE
Original Value: 20
New Value: 30
THE MAINTAIN PACKAGE
Original Value: 10
New Value: 20
THE REGULAR PACKAGE
Original Value: 30
New Value: 40
216. 234
SERIALLY_REUSABLE – Side Effect
• Since we’re giving up state managing, we can now
avoid ORA-4068 when compiling
• For more information, visit my blog:
http://www.realdbamagic.com/he/solving-ora-04068/
234
217. 235
Passing Data Between PL/SQL Programs
• The flexibility built into PL/SQL enables you to
pass:
– Simple scalar variables
– Complex data structures
• You can use the NOCOPY hint to improve
performance with the IN OUT parameters.
218. 236
NOCOPY Hint
• The hint enables the PL/SQL compiler to pass OUT
and IN OUT parameters by reference, as opposed
to passing by value
• Enhances performance by reducing overhead
when passing parameters since less memory is
being used
• The Compiler will ignore the hint if it is not
possible to reference the original structure (type
conversion, constraints, for loop variable, etc.)
236
219. 237
NOCOPY - Example
237
CREATE OR REPLACE PACKAGE show_emp_pkg
IS
TYPE EmpTabTyp IS TABLE OF emp%ROWTYPE INDEX BY BINARY_INTEGER;
PROCEDURE show_emp (p_Deptno IN NUMBER,
p_EmpTab OUT NOCOPY EmpTabTyp);
END;
/
220. 238
Using the PARALLEL_ENABLE Hint
• Can be used in functions as an optimization hint
• Indicates that a function can be used in a
parallelized query or parallelized DML statement
CREATE OR REPLACE FUNCTION f2 (p_p1 NUMBER)
RETURN NUMBER PARALLEL_ENABLE IS
BEGIN
RETURN p_p1 * 2;
END f2;
221. 239
Global Temporary Tables
• Large result sets can be stored in collection variables or in temporary tables
• Temporary tables can be created to hold session-private data that
exists only for the duration of a transaction or session.
• Each session sees its own separate set of rows
• DML locks are not acquired on the data
• We can create indexes, views, and triggers on temporary tables
• Using temporary tables instead of program variables for large
record sets can reduce memory consumption
239
CREATE GLOBAL TEMPORARY TABLE hr.employees_temp
ON COMMIT PRESERVE ROWS;
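The slide's statement omits the column definitions; a complete, runnable sketch (copying the HR employees structure, an assumption for illustration) could be:

```sql
CREATE GLOBAL TEMPORARY TABLE hr.employees_temp
ON COMMIT PRESERVE ROWS                    -- rows survive until the session ends
AS SELECT * FROM hr.employees WHERE 1 = 0; -- copy the structure only, no rows
```

ON COMMIT DELETE ROWS (the default) would instead clear the table at every commit.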
223. 241
What Is Result Caching?
• The result cache allows SQL query and PL/SQL function
results to be stored in cache memory.
• Subsequent executions of the same query or function can
be served directly out of the cache, improving response
times.
• This technique can be especially effective for SQL queries
and PL/SQL functions that are executed frequently.
• Cached query results become invalid when the database
data accessed by the query is modified.
Data dictionary
cache
Library
cache
SGA
Result
cache
Shared pool
224. 242
Increasing Result Cache Memory Size
• You can increase the small, default result cache memory
size by using the RESULT_CACHE_MAX_SIZE
initialization parameter.
SGA
Default
result
cache
Shared pool
Increased
result
cache
225. 243
Setting Result_Cache_Max_Size
• Set Result_Cache_Max_Size from the
command line or in an initialization file created by a
DBA.
• The cache size is dynamic and can be changed either
permanently or until the instance is restarted.
SQL> ALTER SYSTEM SET result_cache_max_size = 2M SCOPE =
MEMORY;
System altered.
SQL> SELECT name, value
2 FROM v$parameter
3 WHERE name = 'result_cache_max_size';
NAME VALUE
---------------------------------------- ------------------
result_cache_max_size 2097152
1 row selected.
226. 244
Enabling Query Result Cache
• Use the RESULT_CACHE_MODE initialization
parameter in the database initialization parameter
file.
• RESULT_CACHE_MODE can be set to:
– MANUAL (default): You must add the RESULT_CACHE
hint to your queries for the results to be cached.
– FORCE: Results are always stored in the result cache
memory, if possible.
227. 245
SQL Query Result Cache
• Definition:
– Cache the results of the current query or query
fragment in memory, and then use the cached results
in future executions of the query or query fragments.
– Cached results reside in the result cache memory
portion of the SGA.
• Benefits:
– Improved performance
228. 246
SQL Query Result Cache
• Scenario:
– You need to find the greatest average value of credit limit grouped by
state over the whole population.
– The query returns a large number of rows being analyzed to yield a
few or one row.
– In your query, the data changes fairly slowly (say every hour) but the
query is repeated fairly often (say every second).
• Solution:
– Use the new optimizer hint /*+ result_cache */ in your query:
SELECT /*+ result_cache */
AVG(cust_credit_limit), cust_state_province
FROM sh.customers
GROUP BY cust_state_province;
229. 247
Clearing the Shared Pool and Result Cache
--- flush.sql
--- Start with a clean slate. Flush the cache and shared
pool.
--- Verify that memory was released.
SET ECHO ON
SET FEEDBACK 1
SET SERVEROUTPUT ON
execute dbms_result_cache.flush
alter system flush shared_pool
/
230. 248
PL/SQL Function Result Cache
• Definition:
– Enables data that is stored in cache to be shared across
sessions
– Stores the function result cache in an SGA, making it
available to any session that runs your application
• Benefits:
– Improved performance
– Improved scalability
231. 249
Marking PL/SQL Function Results to Be Cached
• Scenario:
– You need a PL/SQL function that derives a complex metric.
– The data that your function calculates changes slowly, but the
function is frequently called.
• Solution:
– Use the new RESULT_CACHE clause in your function
definition.
– You can also have the cache purged when a dependent table
experiences a DML operation, by using the RELIES_ON clause.
232. 250
Creating a PL/SQL Function
Using the RESULT_CACHE Clause
• Include the RESULT_CACHE option in the function
definition (specifies that the result should be cached).
• Optionally, include the RELIES_ON clause (specifies the
table upon which the function relies; not needed in 11.2+).
CREATE OR REPLACE FUNCTION ord_count(cust_no NUMBER)
RETURN NUMBER
RESULT_CACHE RELIES_ON (orders)
IS
v_count NUMBER;
BEGIN
SELECT COUNT(*) INTO v_count
FROM orders
WHERE customer_id = cust_no;
RETURN v_count;
END;
233. 251
Using the DETERMINISTIC Clause with Functions
• Specify DETERMINISTIC to indicate that the
function returns the same result value whenever it is
called with the same values for its arguments.
• This helps the optimizer avoid redundant function
calls.
• If a function was called previously with the same
arguments, the optimizer can elect to use the
previous result.
• Do not specify DETERMINISTIC for a function
whose result depends on the state of session variables
or schema objects.
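A sketch of a function that qualifies as DETERMINISTIC (the names and the 0.17 rate are made up for illustration):

```sql
-- A pure function of its arguments, safe to mark DETERMINISTIC
CREATE OR REPLACE FUNCTION f_tax(p_amount NUMBER)
RETURN NUMBER DETERMINISTIC IS
BEGIN
  RETURN p_amount * 0.17;
END;
/

-- DETERMINISTIC also makes the function usable in a function-based index
CREATE INDEX emp_tax_idx ON hr.employees (f_tax(salary));
```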
234. 252
Calling the PL/SQL Function Inside a Query
select cust_last_name, ord_count(customer_id) no_of_orders
from customers
where cust_last_name = 'MacGraw'
239. 257
Confirming That the Cached Result Was Used
select type, namespace, status, scan_count, name
from v$result_cache_objects
/
240. 258
PL/SQL Result Cache Pitfall
• Beware of result caching of timed actions: a cached
function that calls DBMS_LOCK.SLEEP returns immediately
on subsequent calls, so the sleep is skipped
• Cannot be used with invoker's rights or in an
anonymous block
• Cannot be used with pipelined table function
• Cannot be used with OUT or IN OUT parameters.
258
242. 260
Tuning PL/SQL Code
You can tune your PL/SQL code by:
– Identifying the data type and constraint issues
• Data type conversion
• The NOT NULL constraint
• PLS_INTEGER
• SIMPLE_INTEGER
– Writing smaller executable sections of code
– Comparing SQL with PL/SQL
– Rephrasing conditional statements
243. 261
Avoiding Implicit Data Type Conversion
– PL/SQL performs implicit conversions between
structurally different data types (strings, dates, numbers).
– Example: When assigning a PLS_INTEGER variable to
a NUMBER variable
DECLARE
n NUMBER;
BEGIN
n := n + 15; -- converted (15 is a PLS_INTEGER literal)
n := n + 15.0; -- not converted (15.0 is already a NUMBER)
...
END;
244. 262
Understanding the NOT NULL Constraint
PROCEDURE calc_m IS
m NUMBER NOT NULL := 0;
a NUMBER;
b NUMBER;
BEGIN
m := a + b;
END;
With the NOT NULL constraint, the value of the
expression a + b is assigned to a temporary variable,
which is then tested for nullity on every assignment.
PROCEDURE calc_m IS
m NUMBER; -- no constraint
a NUMBER;
b NUMBER;
BEGIN
m := a + b;
IF m IS NULL THEN
-- raise error
END IF;
END;
A better way to check nullity: no
performance overhead
245. 263
Using the PLS_INTEGER Data Type for Integers
Use PLS_INTEGER when dealing with integer data.
– It is an efficient data type for integer variables.
– It requires less storage than INTEGER or NUMBER.
– Its operations use machine arithmetic, which is faster
than library arithmetic.
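A minimal sketch of the recommended declaration (the loop bound is arbitrary):

DECLARE
  counter PLS_INTEGER := 0;  -- machine arithmetic instead of NUMBER library calls
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    counter := counter + 1;
  END LOOP;
END;
/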
246. 264
Using the SIMPLE_INTEGER Data Type
• Definition:
– Is a predefined subtype
– Has the range –2147483648 .. 2147483647
– Does not include a null value
– Is allowed anywhere in PL/SQL where the PLS_INTEGER data
type is allowed
• Benefits:
– Eliminates the overhead of overflow
checking
– Is estimated to be 2–10 times faster
when compared with the PLS_INTEGER
type with native PL/SQL compilation
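A sketch illustrating the missing overflow check: with SIMPLE_INTEGER the value silently wraps around instead of raising an overflow error.

DECLARE
  i SIMPLE_INTEGER := 2147483647;  -- NOT NULL is mandatory, hence the initializer
BEGIN
  i := i + 1;  -- wraps to -2147483648; PLS_INTEGER would raise an overflow error
END;
/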
247. 265
Comparing SQL with PL/SQL
Each has its own benefits:
• SQL:
– Accesses data in the database
– Treats data as sets
• PL/SQL:
– Provides procedural capabilities
– Has more flexibility built into the language
248. 266
Comparing SQL with PL/SQL
• Some simple set processing is markedly faster than the equivalent
PL/SQL.
• Avoid using procedural code when it may be better to use SQL.
...FOR i IN 1..5600 LOOP
counter := counter + 1;
SELECT product_id, warehouse_id
INTO v_p_id, v_wh_id
FROM big_inventories WHERE product_id = counter;
INSERT INTO inventories2 VALUES (v_p_id, v_wh_id);
END LOOP;...
BEGIN
INSERT INTO inventories2
SELECT product_id, warehouse_id
FROM main_inventories;
END;
249. 267
Rephrasing Conditional Control Statements
If your business logic results in one condition being
true, use the ELSIF syntax for mutually exclusive
clauses:
Instead of a series of independent IF statements:
IF v_acct_mgr = 145 THEN
process_acct_145;
END IF;
IF v_acct_mgr = 147 THEN
process_acct_147;
END IF;
IF v_acct_mgr = 148 THEN
process_acct_148;
END IF;
IF v_acct_mgr = 149 THEN
process_acct_149;
END IF;
Use mutually exclusive ELSIF clauses:
IF v_acct_mgr = 145 THEN
process_acct_145;
ELSIF v_acct_mgr = 147 THEN
process_acct_147;
ELSIF v_acct_mgr = 148 THEN
process_acct_148;
ELSIF v_acct_mgr = 149 THEN
process_acct_149;
END IF;
251. 269
SQL*Plus
• Introduced in Oracle 5 (1985)
• Looks very simple but has tight integration with
other Oracle infrastructure and tools
• Very good for reporting, scripting, and automation
• Replaced old CLI tool called …
UFI (“User Friendly Interface”)
252. 270
What’s Wrong With SQL*Plus?
• Nothing is really wrong with SQL*Plus – it is
updated constantly, but it is missing a lot of
functionality
• SQL*Plus forces us to use GUI tools to complete
some basic tasks
• Easy to understand, a bit hard to use
• Not easy for new users or developers
253. 271
Using SQL Developer
• SQL Developer is a free GUI tool to handle common
database operations
• Comes with the Oracle client installation starting
with Oracle 11g
• Good for development and management of databases
– Developer mode
– DBA mode
– Modeling mode
• Has a Command Line interface (SDCLI) – but it’s not
interactive
254. 272
SQL Developer Command Line (SQLcl)
• The SQL Developer Command Line (SQLcl, previously
SDSQL) is a new command line interface (CLI) for SQL
developers, report users, and DBAs
• It is part of the SQL Developer suite – developed by
the same team: Oracle Database Development Tools
Team
• Does (or will do) most of what SQL*Plus can do, and
much more
• Main focus: making life easier for CLI users
• Minimal installation, minimal requirements
255. 273
Current Status (November 2016)
• Production as of September 2016
– current version: 4.2.0.16.308.0750, November 3, 2016
• New version comes out every couple of months
– Adding support for existing SQL*Plus commands/syntax
– Adding new commands and functionality
• The team is accepting bug reports and enhancement
requests from the public
• Active community on OTN forums!
256. 274
Prerequisites
• Very small footprint: 16 MB
• The tool is Java based, so it can run on Windows, Linux,
and OS X
• Java 7/8 JRE (runtime environment - no need for
JDK)
• No need for installer or setup
• No need for any other additional software or
special license
• No need for an Oracle Client
260. 278
Connecting to the Database
• With no Oracle Client, it uses a thin connection:
EZConnect connect style works out of the box
connect host:port/service
• Supports TNS, thick, and LDAP connections when an
Oracle home is detected
• Auto-completes connection strings from recent
connections AND tnsnames.ora
261. 279
Object Completion and Easy Edit
• Use the tab key to complete commands
• Can be used to list tables, views, or other queryable
objects
• Can be used to replace the * with actual column
names
• Use the arrow keys to move around the command
• Use CTRL+W and CTRL+S to jump to the
beginning/end of commands
262. 280
Command History
• 100-command history buffer
• Commands are persistent between sessions (watch out for
security!)
• Use the UP and DOWN arrow keys to access old commands
• Usage:
history
history usage
history script
history full
history clear [session?]
• Load from history into the command buffer:
history <number>
263. 281
Describe, Information and Info+
• describe lists the columns of a table, just like
SQL*Plus
• information shows column names, default values,
indexes, and constraints
• On a 12c database, information also shows table
statistics and In-Memory status
• Works for tables, views, sequences, and code objects
• info+ shows additional information regarding column
statistics and column histograms
264. 282
SHOW ALL and SHOW ALL+
• The show all command is familiar from SQL*Plus –
it will show all the parameters for the SQL*Plus
settings
• The show all+ command will show the show all
command and some perks: available tns entries,
list of pdbs, connection settings, instance settings,
nls settings, and more!
265. 283
Pretty Input
• Using the SQL Developer formatting rules, SQLcl can
change our input into well-formatted commands.
• Use SQLFORMATPATH to point to the SQL
Developer rule file (XML)
SQL> select * from dual;
D
-
X
SQL> format buffer;
1 SELECT
2 *
3 FROM
4* dual
266. 284
SQL*Plus Output
• SQL*Plus output is generated as text tables
• We can output the data as HTML, but then HTML takes
over everything we do in SQL*Plus (e.g., the describe
command)
• We can't use colors in our output
• We can't generate other types of useful output
(CSV, for example, is really hard)
267. 285
Generating Pretty Output
• Outputting query results becomes easier with the “set
sqlformat” command (also available in SQL
Developer)
• We can create a query in the “regular” way and then
switch between the different output styles:
– ANSIConsole
– Fixed column size output
– XML or JSON output
– HTML output generates a built in search field and a
responsive html output for the result only
268. 286
Generating Other Useful Outputs
• We can generate loader-ready output (with "|" as
a delimiter)
• We can generate INSERT commands
• We can easily generate CSV output
• Usage:
set sqlformat {csv | html | xml | json | ansiconsole |
insert | loader | fixed | default}
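A quick sketch of switching formats within one SQLcl session (the query is arbitrary; each set sqlformat takes effect for subsequent statements):

SQL> set sqlformat csv
SQL> select * from dual;
SQL> set sqlformat json
SQL> select * from dual;
SQL> set sqlformat default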