Database statistics are not limited to tables, columns, and indexes. PL/SQL functions also have a number of associated statistics, namely costs (CPU, I/O, network), selectivity, and cardinality (for functions that return collections). These statistics have default values that only somewhat represent reality. However, these values are always used by Oracle's cost-based optimizer to build execution plans. This session uses real-life examples to illustrate how properly managed PL/SQL statistics can significantly improve execution plans. It also demonstrates that Oracle's extensible optimizer is flexible enough to support packaged functions.
This presentation is an attempt to switch sides and show code management from the developer's point of view. It stays outside of specific VCS solutions and focuses on hands-on approaches: activity control via system triggers, conditional compilation, synonym manipulation, and utilization of Edition-Based Redefinition (EBR).
The document discusses views in Oracle databases and how they have evolved beyond simple stored SQL queries. Views can now serve as an isolation layer between applications and tables, accept DML operations directly or through triggers, and include complex functionality through features like parameterized conditions, dynamic SQL, and INSTEAD OF triggers. The document outlines techniques for optimizing DML operations on views, such as using dynamic SQL to only update changed columns, and leveraging compound triggers for shared program logic. It also warns of performance issues that can arise from logical primary keys on views.
Managing Unstructured Data: LOBs in the World of JSON - Michael Rosenblum
This document discusses managing unstructured JSON data in Oracle databases. It describes how a company initially stored JSON files in VARCHAR2 columns, but then the files grew larger than 4000 characters requiring a change to CLOB storage. This change caused issues until developers understood that CLOBs have different access, storage, and processing mechanisms compared to VARCHAR2. The document provides an overview of CLOB architecture including data access, internal storage, caching, logging, and indexing. It emphasizes that properly understanding CLOBs is important when storing and manipulating JSON data in Oracle databases.
The document discusses calling user-defined functions within SQL statements. It notes that functions may be called multiple times depending on the structure of the SQL statement. Functions in the SELECT and WHERE clauses of a query will be called independently for each row. Functions in an ORDER BY clause may also be called twice if an inline view or view is used due to query rewrite. The number of function calls can be tracked using a package to inspect execution.
Hidden Gems of Performance Tuning: Hierarchical Profiler and DML Trigger Opti... - Michael Rosenblum
In any large ecosystem, there are always areas that stay in the twilight, outside of the public’s attention. This deep dive attempts to change the trend regarding two, at first glance, unrelated PL/SQL topics: hierarchical profiler (HProf) and database triggers. But if you look closer, there’s something in common: they’re significantly underused! HProf because nobody heard about it, database triggers because of decades-old stigma. Let’s put both of them back into our development toolset!
Part #1. One of the most critical FREE SQL and PL/SQL performance tuning tools is almost totally unknown! If you ask "How much time is spent on routine A?" or "How often is function B called?", most developers would hand-code something instead of using the Oracle PL/SQL HProf. This isn't because the provided functionality is disliked, but because developers aren't aware of its existence! This presentation is an attempt to alter this trend and reintroduce HProf to a wider audience.
Part #2. There isn’t anything “evil” about database triggers; they just have to be used where they can actually solve problems. In this presentation, various kinds of triggers will be examined from a global system optimization view, including tradeoffs between multiple goals (e.g., depending upon the available hardware, developers can select either CPU-intensive or I/O-intensive solutions). This presentation will focus on the most common performance problems related to different kinds of DML triggers and the proper ways of resolving them.
This document discusses techniques for detecting and preventing SQL injection using the Percona Toolkit and Noinject!. It begins by introducing SQL injection and how attackers can modify SQL queries without changing server code. It then discusses using query fingerprints to detect new queries that may indicate injection attempts. The Percona Toolkit tools pt-query-digest and pt-fingerprint are used to generate and store fingerprints in a whitelist. Pt-query-digest can detect new fingerprints that have not been reviewed. The Noinject! proxy script uses fingerprints to inspect queries in real-time and block any that do not match whitelisted patterns. The document concludes by discussing limitations and ways to improve the fingerprinting approach.
The document discusses how the Oracle optimizer can sometimes choose suboptimal execution plans, leading to performance deterioration. It presents a scenario where the same query runs much slower when bind variables are used. The document then shows how SQL profiles can be used to enforce a better execution plan. It argues that manually creating profiles is not ideal for 24/7 environments. The document proposes using machine learning for outlier detection to identify performance issues and then automatically generate SQL profiles to address the issues. Code examples are provided for outlier detection and generating profiles through the Oracle API to allow automating the process.
The document provides guidance on optimizing PL/SQL code performance. It discusses avoiding unnecessary row-by-row processing, nested row-by-row processing, and excessive access to the DUAL table. Instead, it recommends performing set-based operations using SQL and caching frequently accessed values in memory to reduce database hits. The document also covers reducing excessive function calls and unnecessary parsing through techniques like result caching and inline views.
The document provides instructions for logging into SQL*Plus, executing SQL statements, installing SQL Developer, browsing database objects, using the SQL worksheet, using PL/SQL in SQL Developer, creating reports, SQL*Plus file commands, and finding additional self-help tutorials. It includes steps for downloading and installing SQL Developer, selecting data from a sample customers table, and saving SQL scripts and output using commands like SAVE, GET, START, @, EDIT, and SPOOL.
The document discusses various PL/SQL programming concepts including PL/SQL block structure, procedures, functions, packages, cursors, exceptions, and dependencies. It provides guidelines for proper naming conventions, restrictions on calling functions from SQL expressions, and best practices for cursor and package design. The document also covers object types, subtypes, and working with collections in PL/SQL.
This document provides an overview of advanced PL/SQL concepts such as flow control, bulk processing, Oracle hints, and resources. It discusses techniques for optimizing PL/SQL code through improved loop and conditional logic. Bulk processing using FORALL is described as enabling set-based operations. Oracle hints are introduced as a way to suggest execution plans to the optimizer. Parallel query is explained as a way to improve performance on multi-processor systems. Finally, resources for further reading are listed.
The document discusses virtual indexes and columns in Oracle. Virtual indexes do not require disk space and time for creation like physical indexes. They can be used to test query execution plans without impacting the system. The document shows how to create a virtual index on the NO_FISICO column of the MOVTO_H table and use it to improve a query. It also discusses calculating statistics for a virtual index and using virtual columns to add calculated or derived columns to a table without changing the table definition.
Oracle 11g new features for developers - Scott Wesley
Abstract: There are a wealth of new features available in the 11g database release. This presentation touches on SQL & PL/SQL features I found of interest, and concentrates particularly on virtual columns.
Relevant scripts found at my blog
http://grassroots-oracle.com/2009/07/presentations.html#11gNewFeatures
This document discusses randomization using SystemVerilog. It begins by introducing constraint-driven test generation and random testing. It explains that SystemVerilog allows specifying constraints in a compact way to generate random values that meet the constraints. The document then discusses using objects to model complex data types for randomization. It provides examples of using SystemVerilog functions like $random, $urandom, and $urandom_range to generate random numbers. It also discusses constraining randomization using inline constraints and randomizing objects with the randomize method.
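As a language-neutral illustration of the idea behind constrained random generation, here is a minimal Java sketch of drawing a value uniformly from an inclusive range, similar in spirit to SystemVerilog's $urandom_range. The method name urandomRange is our own, not from any library.

```java
import java.util.Random;

// Sketch: constrained random generation in Java. urandomRange(rng, lo, hi)
// returns a uniformly distributed value in the inclusive range [lo, hi],
// loosely analogous to SystemVerilog's $urandom_range.
public class RandRangeDemo {
    static int urandomRange(Random rng, int lo, int hi) {
        // nextInt(n) returns a value in [0, n); shift it into [lo, hi]
        return lo + rng.nextInt(hi - lo + 1);
    }

    public static void main(String[] args) {
        Random rng = new Random(42);  // fixed seed for reproducibility
        for (int i = 0; i < 5; i++) {
            int v = urandomRange(rng, 10, 20);
            System.out.println(v);    // always within [10, 20]
        }
    }
}
```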
By using specially crafted parameters in double quotes, it is possible to bypass the input validation of the Oracle dbms_assert package and inject SQL code. This allows dozens of already patched Oracle vulnerabilities to be exploited again across versions 8.1.7.4 to 10.2.0.2. The researcher notified Oracle of the problem in April 2006. To mitigate risks, privileges like CREATE PROCEDURE should be revoked to prevent injection of malicious functions or procedures.
This document contains a Java practical file belonging to Rachit Gupta, an MCA student. It consists of 16 programs of varying complexity written in Java, along with the output of each program. The programs cover topics such as calculating the square root of a number, finding the perimeter of a rectangle, calculating percentage of marks, and generating an electric bill based on units consumed. The file is a submission of Rachit Gupta's Java practical assignments for his 4th semester MCA course at the University of Jammu.
The document provides templates and examples for creating Swing-based GUI applications, servlets, Java Server Pages (JSP), Java Database Connectivity (JDBC), Java Server Faces (JSF), Enterprise Java Beans (EJB), Hibernate, Struts, and web services in Java. It includes templates for common GUI components, servlets, JSP tags, database queries, managed beans, navigation rules, entity beans, Hibernate mappings, actions, and web service providers/consumers.
The document discusses different ways to implement threading in Java programs. It provides code examples to demonstrate creating threads by extending the Thread class and implementing the Runnable interface. The code examples show printing output from both the main thread and child threads to illustrate threading concepts. Socket programming and RMI examples are also provided with code to implement client-server applications using threads.
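The two thread-creation approaches mentioned above can be sketched in a few lines of Java. The class and method names below are illustrative; the log list exists only so the result can be inspected after the threads finish.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Two standard ways to create a thread in Java: extend Thread, or pass a
// Runnable to the Thread constructor. Output order between the two child
// threads is nondeterministic, so we join both before inspecting results.
public class ThreadDemo {
    static final List<String> log = new CopyOnWriteArrayList<>();

    static class Worker extends Thread {               // way 1: extend Thread
        public void run() { log.add("from Thread subclass"); }
    }

    public static void runDemo() throws InterruptedException {
        Thread t1 = new Worker();
        Thread t2 = new Thread(() -> log.add("from Runnable"));  // way 2
        t1.start();
        t2.start();
        t1.join();   // the main thread waits for both children
        t2.join();
        log.add("from main thread");
    }

    public static void main(String[] args) throws InterruptedException {
        runDemo();
        log.forEach(System.out::println);
    }
}
```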
If you thought Monads are a mystery, then this demonstration would show you how to evolve your code towards a Monad without knowing about it. This demo will neither go into any Category Theory nor begin with monadic laws. Instead, we will start with typical code that you see in your daily life as a developer, attempt to DRY (Don't Repeat Yourself) it up and eventually use Monad to remove duplication and verbosity. You'll also see how Monads make your code more declarative and succinct by sequencing the steps in your domain logic.
Also, we know that in Java 8, Checked Exceptions + λ == Pain! To be more precise, we will evolve a Try&lt;T&gt; (exception-handling monad) that is missing in Java 8, similar to the one found in Scala.
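A minimal sketch of such a Try type in Java, inspired by Scala's scala.util.Try. The names Try, Success, and Failure are illustrative only (not from the JDK), and this version captures only runtime exceptions to keep the sketch short.

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Sketch of an exception-handling monad: a computation either succeeds with
// a value (Success) or fails with an exception (Failure), and map() chains
// further steps without try/catch noise at every call site.
abstract class Try<T> {
    public static <T> Try<T> of(Supplier<T> s) {
        try { return new Success<>(s.get()); }
        catch (RuntimeException e) { return new Failure<>(e); }
    }
    public abstract <U> Try<U> map(Function<T, U> f);
    public abstract T getOrElse(T fallback);
    public abstract boolean isSuccess();
}

final class Success<T> extends Try<T> {
    private final T value;
    Success(T value) { this.value = value; }
    public <U> Try<U> map(Function<T, U> f) { return Try.of(() -> f.apply(value)); }
    public T getOrElse(T fallback) { return value; }
    public boolean isSuccess() { return true; }
}

final class Failure<T> extends Try<T> {
    private final RuntimeException error;
    Failure(RuntimeException error) { this.error = error; }
    @SuppressWarnings("unchecked")  // a Failure carries no T, so the cast is safe
    public <U> Try<U> map(Function<T, U> f) { return (Try<U>) this; }
    public T getOrElse(T fallback) { return fallback; }
    public boolean isSuccess() { return false; }
}

public class TryDemo {
    public static void main(String[] args) {
        // Division by zero becomes a Failure instead of propagating.
        int ok  = Try.of(() -> 10 / 2).map(n -> n + 1).getOrElse(-1);
        int bad = Try.of(() -> 10 / 0).map(n -> n + 1).getOrElse(-1);
        System.out.println(ok + " " + bad); // prints "6 -1"
    }
}
```

Note how the failing pipeline short-circuits: once a Failure is produced, every subsequent map() is a no-op.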
Currying and Partial Function Application (PFA) - Dhaval Dalal
We look at Currying and Partial Function Application (PFA) in Functional Programming. Languages like Clojure have PFA but not currying, whereas Haskell has currying but not PFA; Scala has both, and Groovy requires you to call methods like curry() and rcurry() explicitly. In the OO paradigm we use DI (Dependency Injection), and we will see how DI is automatically subsumed by Currying and PFA.
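The distinction can be sketched in Java: currying turns a two-argument function into a chain of one-argument functions, while partial application fixes some arguments up front. The helper names curry and partial below are our own.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Sketch: currying vs. partial application for a two-argument function.
public class CurryDemo {
    // Currying: BiFunction<A,B,R> becomes A -> (B -> R).
    static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    // Partial application: fix the first argument, get a one-argument function.
    static <A, B, R> Function<B, R> partial(BiFunction<A, B, R> f, A a) {
        return b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        BiFunction<Integer, Integer, Integer> add = (a, b) -> a + b;

        Function<Integer, Function<Integer, Integer>> curriedAdd = curry(add);
        Function<Integer, Integer> addTen = partial(add, 10);

        System.out.println(curriedAdd.apply(2).apply(3)); // prints 5
        System.out.println(addTen.apply(5));              // prints 15
    }
}
```

Fixing a "dependency" argument with partial, as addTen does, is the functional analogue of injecting a collaborator through a constructor in the OO/DI style.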
The Ring programming language version 1.5.2 book - Part 9 of 181 - Mahmoud Samir Fayed
Here are the key things added in Ring 1.5 for tracing functions:
- RingVM_SetTrace() allows setting a function to be called on trace events. This function will receive information about the trace.
- RingVM_TraceData() returns an array with details of the current execution context like line number, file name, function name etc.
- RingVM_TraceEvent() returns the type of trace event, like new line, new function, return etc.
- Additional functions provide the current trace function name, ability to evaluate code in a specific scope, and control error handling during tracing.
This allows implementing a tracing function to log or print details at each step of execution.
This document describes the dw::Runtime module in DataWeave, which contains functions that allow interaction with the DataWeave engine. It defines several functions including fail, failIf, locationString, orElse, orElseTry, prop, and props. These functions allow throwing exceptions, conditional failure, getting location strings, chaining try blocks, retrieving properties, and more. Examples are provided for each function to demonstrate their usage.
The document provides information on arrays in Java programming:
1. Arrays allow storing multiple values of the same type in a single variable through contiguous memory locations. One-dimensional and multi-dimensional arrays are covered.
2. Sample code is provided to demonstrate declaring and initializing a one-dimensional integer array, calculating the sum of elements, and accepting input from the user to populate the array.
3. Another sample shows transposing a 2D array, with code to input values, store the original and transposed arrays, and output the transposed array.
Exercises are provided before, during and after the lab session to practice array concepts.
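The 2D-array transpose mentioned above can be sketched compactly in Java; element [i][j] of the source becomes element [j][i] of the result. The class and method names are illustrative.

```java
import java.util.Arrays;

// Sketch: transposing a 2D integer array.
public class TransposeDemo {
    static int[][] transpose(int[][] m) {
        int rows = m.length, cols = m[0].length;
        int[][] t = new int[cols][rows];       // dimensions swap
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                t[j][i] = m[i][j];             // rows become columns
        return t;
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2, 3}, {4, 5, 6}};    // 2x3 input
        System.out.println(Arrays.deepToString(transpose(m)));
        // prints [[1, 4], [2, 5], [3, 6]]
    }
}
```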
This document provides an overview of SQL procedural language (SQL PL) programming tips for DB2 stored procedures on z/OS. It discusses topics like when to use native SQL procedures, benefits of templates, compound blocks, dynamic SQL, XML support, and new features in DB2 11 like array data types and global variables. The document is intended for application developers to help simplify applications using SQL PL procedures.
Triggers are stored PL/SQL blocks that are associated with a table, view, schema or database and execute automatically when a triggering event occurs. There are two types of triggers: application triggers that fire on application events and database triggers that fire on data or system events. Triggers can be used to centralize global operations, perform related actions, enforce complex integrity constraints and compute derived values automatically. The timing of a trigger determines whether it executes before or after the triggering event.
- Java RMI allows methods to be called remotely between JVMs on different hosts.
- For an interface to be remote, it must extend the Remote interface. Remote objects implement remote interfaces and extend UnicastRemoteObject.
- Primitive types are passed by value between remote systems, while non-remote objects are serialized and passed by value with references also serialized. Remote objects are passed as remote references called stubs.
Great local places, an urban design initiative for local governments by Micha... - MMcKplandesign
Summarising an excellent urban design programme I've been working on with Toowoomba Regional Council, and also some research into urban design skills and policy in South East Queensland
When performance issues arise, developers often blame the database, while DBAs are quick to blame developers. If all else fails, the network is the culprit. Most systems have many parts managed by multiple entities within an organization. This session explores how to improve system quality by proper monitoring of user activity rather than server activity. Without an overall architectural approach to performance tuning, any aggregated statistics (CPU workload, communication speed, network latency, etc.) are meaningless unless you can explain to a user why a button click takes so much time. This session offers a coherent methodology for identifying performance issues, pinpointing common problem sources, and providing solutions.
NYU Class: Web Architecture and Content Creation ProjectLin Davis
NYU Class Project-(Publishing: M.S.)
Scope: Create a presentation, focusing around a Website redesign with suggested content, marketing plan, blog, wire-frame and mission statement.
Dark Pink: Planning Adelaide's Medium Density Future - What can SA learn from...MMcKplandesign
Michael McKeown, a senior urban planner and designer, gave a seminar on planning Adelaide's medium-density future by expanding housing options between low-density suburbs and high-rise apartments. He discussed how medium-density housing, such as townhouses and low-rise apartments, can provide more affordable and diverse housing choices while maintaining a sense of community.
This document provides information about senior services that promote emotional, social, and physical well-being for older adults. It discusses common home issues for seniors such as safety hazards, maintenance troubles, rising costs, and accessibility needs. Solutions outlined include staff assistance, low-cost modifications for eligible homeowners, disability upgrades for renters, and guidelines for plumbing, carpentry, electrical, and home modification work. Income eligibility guidelines are also listed along with a phone number for further information and assistance.
Surat izin belajar diberikan kepada Dodi Sutejo S.Sos, staf Biro Keuangan Setda Provinsi Riau untuk mengikuti program Magister Sains Manajemen di Program Pasca Sarjana UNRI. Surat izin ini diberikan oleh Kepala Bagian Anggaran Daerah Drs. H. Mohd. Roem, MP guna meningkatkan kualitas sumber daya manusia di Biro Keuangan.
להגיש הצעות מחיר כמו מקצוענים - מסעות פרסום משופרים בגוגל אדוורדסעידן שלומן
לקבלת פרטים נוספים על פרסום בגוגל אדוורדס הכנסו ל - http://www.signup.co.il/
להגיש הצעות מחיר כמו מקצוענים, מסעות פרסום משופרים בגוגל אדוורדס. הספר נכתב על ידי צוות גוגל ישראל.
Planning is not enough. Talk given at the Planning Institute of Australia's ...MMcKplandesign
Short talk given at the Planning Institute of Australia's national congress in Canberra on 26th March 2013.
The topic is implementation really. More doing, less talking. Well, more doing anyway. Includes profound quotes from well known town planning commentators Kevin McCloud and... Nick Cave.
Designing an urban extension for 70,000 people and 17,000 jobs in South East Queensland. Talk to 7th International Urban Design Conference, Adelaide, South Australia 2 September 2014
Understanding Query Optimization with ‘regular’ and ‘Exadata’ OracleGuatemala User Group
The document discusses query optimization with regular Oracle databases and Exadata databases. It explains what happens when a SQL statement is issued, including parsing, optimization, and execution. It describes what an execution plan is and how it can be generated and displayed. It discusses how operations can be offloaded to storage cells on Exadata and factors the optimizer considers for determining a good execution plan.
The document summarizes how SQL Plan Directives in Oracle 12c can help address issues caused by cardinality misestimation in the optimizer. It provides an example where the optimizer underestimates the number of rows returned by a query on a table due to not having statistics on correlated columns. In 12c, a SQL Plan Directive is automatically generated after the first execution to capture this misestimation. On subsequent queries, the directive can be used to provide more accurate cardinality estimates through automatic reoptimization or dynamic sampling.
New Tuning Features in Oracle 11g - How to make your database as boring as po...Sage Computing Services
One of the key problems that have haunted Oracle sites since the introduction of the cost based optimiser is the ability to provide a stable level of performance over time. The very responsiveness of the CBO to factors such as changes in statistics and initialisation parameters can lead to sudden changes in performance levels. Oracle 11g is set to introduce a number of features that will assist the DBA in providing a stable environment for mission critical applications. Excitement is for out of work time, (and for developers). The aim of most database administrators is to have as boring a working life as possible. Oracle 11g may help us achieve those aims.
This presentation discusses some of those features including:
Capture and replay of workload
Automatic SGA tuning
Managing and fixing plans
The 11g Automatic Tuning Advisor
Day 1 of the training covers introductory C++ concepts like object-oriented programming, compilers, IDEs, classes, objects, and procedural programming concepts. Day 2 covers more advanced class concepts like constructors, destructors, static members, returning objects, and arrays of objects. Day 3 covers function and operator overloading.
This document discusses various Oracle SQL concepts including query optimization, execution plans, joins, indexes, and full table scans. It provides guidance on understanding how Oracle processes and executes SQL queries, the importance of statistics and selectivity, and techniques for writing efficient queries such as predicate pushing and query transformations. The goal is to help readers gain a conceptual understanding of Oracle's internals to formulate more efficient SQL.
[Pgday.Seoul 2019] Citus를 이용한 분산 데이터베이스PgDay.Seoul
This document summarizes how to set up and use Citus, an open-source PostgreSQL-based distributed database. It explains how to install Citus, add worker nodes, create distributed tables, and use features like reference tables to perform distributed queries across the cluster.
The document discusses PAPI (Performance API), a tool for collecting hardware performance counter data from processors. PAPI provides a consistent interface for accessing performance counters across platforms and defines platform-neutral events. It supports many modern processors and operating systems and comes with utilities for collecting, analyzing, and visualizing performance data. An example shows how reordering loops in a matrix multiplication algorithm can improve data cache and TLB behavior.
This document discusses features of Oracle Database 12c related to auditing and tracking changes over time. It summarizes that Oracle 12c includes flashback data archive, which allows viewing or restoring data to a previous state. This feature can be used for auditing and tracking changes made to database tables. The document also discusses how Oracle 12c captures additional context metadata with each change, including user, host, and program used, allowing more detailed tracking of changes than prior releases.
This paper describes the evolution of the Plan table and DBMSX_PLAN in 11g and some of the features that can be used to troubelshoot SQL performance effectively and efficiently.
Create a JAVA program that performs file IO and database interaction.pdfmalavshah9013
Create a JAVA program that performs file IO and database interaction via SQL. The program
needs to read data from the provided file: \"Project.csv\" and insert the data into a database. Then
the program needs to create a report from the database sorted by price descending. The report
should be in the format demonstrated below.
id (primary key - generated by the database)
cpuname
performance
price
Project.csv contents:CPU NamePerformancePrice (USD)Intel Core i7-3770K @
3.50GHz9,556$560.50Intel Core i7-3770 @ 3.40GHz9,327$335.55Intel Core i7-3820 @
3.60GHz8,990$404.38AMD FX-8350 Eight-Core8,940$149.99Intel Core i7-2600K @
3.40GHz8,501$379.97Intel Core i7-2600 @ 3.40GHz8,242$214.99Intel Core i7-4720HQ @
2.60GHz8,046NAAMD FX-8320 Eight-Core8,008$145.99Intel Core i7-6700HQ @
2.60GHz7,997$1509Intel Core i7-4710HQ @ 2.50GHz7,826NAIntel Core i5-6600K @
3.50GHz7,762$239.99Intel Core i7-4700HQ @ 2.40GHz7,754$383.00Intel Core i7-4700MQ
@ 2.40GHz7,736$467.40Intel Core i5-4690K @ 3.50GHz7,690$239.99AMD FX-8150 Eight-
Core7,619$165.99Intel Core i7-3630QM @ 2.40GHz7,604$304.49Intel Core i5-4670K @
3.40GHz7,598$249.99Intel Core i5-4690 @ 3.50GHz7,542$224.99Intel Core i7-3610QM @
2.30GHz7,460$399.99Intel Core i5-4670 @ 3.40GHz7,342$226.99Intel Core i5-4590 @
3.30GHz7,174$199.99Intel Core i7-4702MQ @ 2.20GHz7,146NAIntel Core i5-3570K @
3.40GHz7,130$477.23
Solution
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
public class Main
{
/**
* This the main function that runs at the start
* param args - input arguments from the command line
*/
static public void main(String[] args)
{
CPUList cpuList = new CPUList(); //The CPUList used to retrieve data from the
fiile and store in the db
CPUList cpuListRetrieved = new CPUList(); //The CPUList used to retrieve data from the
database
CpuDb cpuDb = new CpuDb(); //The database object used to move data to and
from the CPU Lists
try
{
//Read in the file and store each line into the CPU objects in a list
Files.lines(Paths.get(\"Project04Data.csv\"))
.map(line -> line.split(\"\ \ \")) // Stream
.flatMap(Arrays::stream) // Stream
.forEach(line -> cpuList.AddCpu(line));
//Clear the list table for the new listing
cpuDb.Clear();
//Insert the Cpu List into the database
cpuDb.SetCpuList(cpuList);
//Retrieve the Cpu List into a different CPU List object from the database
cpuDb.GetCpuList(cpuListRetrieved);
//Show the report from the new list that was retrieved from the database
cpuListRetrieved.ShowReport();
} catch (IOException e)
{
e.printStackTrace();
}
}
}
CPUList.java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
public class CPUList
{
ArrayList theList = new ArrayList<>();
/**
* Default constructor for the CPU
*/
public void CPUList()
{
}
/**
* param strInputLine Input line to be used in creating the CPU object
*/
public void AddCpu(String strInputLine)
{
theList.add(new CPU(strInputLine));
}
/**
* param tempCPU - A CPU object to.
Performance is a feature! - London .NET User GroupMatt Warren
Starting with the premise that "Performance is a Feature", this session will look at how to measure, what to measure and how get the best performance from your .NET code.
We will look at real-world examples from the Roslyn code-base and StackOverflow (the product), including how the .NET Garbage Collector needs to be tamed!
We are all told that we must use bind variables rather than literals in our code, and then are left to deal with the problems this causes. This issue probably still causes more performance tuning problems than any other. This presentation discusses how Oracle has handled the optimisation of statements using bind variables from version 8i to the new features in Oracle 11g and highlights some issues that still exist in version 11g.
Managing Statistics for Optimal Query PerformanceKaren Morton
Half the battle of writing good SQL is in understanding how the Oracle query optimizer analyzes your code and applies statistics in order to derive the “best” execution plan. The other half of the battle is successfully applying that knowledge to the databases that you manage. The optimizer uses statistics as input to develop query execution plans, and so these statistics are the foundation of good plans. If the statistics supplied aren’t representative of your actual data, you can expect bad plans. However, if the statistics are representative of your data, then the optimizer will probably choose an optimal plan.
This document provides examples of using different format parameters with the DBMS_XPLAN.DISPLAY_CURSOR procedure to customize the output. Key information displayed includes execution statistics, predicates, projections, outlines, and indications of adaptive plans.
The document describes linking and accelerating programs in the TNS/E environment. It discusses using the eld linker to link multiple modules into an executable, how to handle unresolved symbols, and how to create dynamic link libraries (DLLs). It also covers using the Object Code Accelerator (OCA) to optimize Guardian code files for the Itanium architecture, and tools like enoft and fileinfo for examining object files and determining if a program has been accelerated.
The document discusses several new features and enhancements in Oracle Database 11g Release 1. Key points include:
1) Encrypted tablespaces allow full encryption of data while maintaining functionality like indexing and foreign keys.
2) New caching capabilities improve performance by caching more results and metadata to avoid repeat work.
3) Standby databases have been enhanced and can now be used for more active purposes like development, testing, reporting and backups while still providing zero data loss protection.
The document discusses new features in Oracle Database 11g Release 1. Key points include:
1. Encrypted tablespaces allow encryption of data at the tablespace level while still supporting indexing and queries.
2. New caching capabilities improve performance by caching more results in memory, such as function results and query results.
3. Standby databases have enhanced capabilities and can now be used for more active purposes like development, testing and reporting for increased usability and value.
MySQLinsanity! This document provides an overview of Stanley Huang's MySQL performance tuning experience and expertise. It begins with introductions and background on Stanley Huang. It then discusses the typical phases of MySQL performance tuning projects, including SQL tuning and RDBMS tuning. Specific tips are provided around topics like slow query logging, index usage, partitioning, and server configuration. The document concludes with an invitation for questions.
The Hidden Face of Cost-Based Optimizer: PL/SQL Specific Statistics
1. 1 of 44
The Hidden Face of the Cost Based Optimizer: PL/SQL-Specific Statistics [UGF2781]
Michael Rosenblum
Dulcian, Inc.
www.dulcian.com
2. 2 of 44
Who Am I? – “Misha”
Oracle ACE
Co-author of 3 books
PL/SQL for Dummies
Expert PL/SQL Practices
Oracle PL/SQL Performance Tuning Tips & Techniques
(Rosenblum & Dorsey, Oracle Press, July 2014)
Won ODTUG 2009 Speaker of the Year
Known for:
SQL and PL/SQL tuning
Complex functionality
Code generators
Repository-based development
3. 3 of 44
Did you know that…?
User-defined functions have a number of statistics associated with them.
These statistics impact decisions made by the Cost Based Optimizer (CBO).
Default values of these statistics are… well… less than adequate.
… but you can adjust them manually!
4. 4 of 44
Defaults
Hardware resources
CPU cost – 3000 [CPU instructions]
I/O cost – 0 [data blocks to be read/written]
Network cost – 0 [data blocks to be read/written]
Cardinality – 8168 [rows]
Selectivity – 1% [out of total set]
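The default cardinality is easy to see in action with any table function that has no associated statistics. A minimal sketch (the type and function names below are illustrative, not from the deck; the 8168-row figure corresponds to an 8 KB block size):

```sql
-- Illustrative names; requires CREATE TYPE / CREATE FUNCTION privileges.
CREATE OR REPLACE TYPE number_tt AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION f_three_rows RETURN number_tt PIPELINED IS
BEGIN
    FOR i IN 1..3 LOOP
        PIPE ROW (i);
    END LOOP;
    RETURN;
END;
/
EXPLAIN PLAN FOR SELECT * FROM TABLE(f_three_rows);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- With 8 KB blocks, the COLLECTION ITERATOR PICKLER FETCH step is
-- typically costed at 8168 rows, even though the function pipes only 3.
```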
6. 6 of 44
Basic Case
Problem:
There are two functions in a SQL statement.
You want to tell the CBO that one of them is expensive.
Solution:
ASSOCIATE STATISTICS WITH FUNCTIONS f_light_tx
DEFAULT COST (0,0,0) /* CPU,IO,Network */; -- light
ASSOCIATE STATISTICS WITH FUNCTIONS f_heavy_tx
DEFAULT COST (99999,99999,99999); -- heavy
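The same statement family also covers the default selectivity (a percentage between 0 and 100), and an association can be dropped when it is no longer wanted. A hedged sketch reusing the function name above:

```sql
-- Tell the CBO that a predicate on f_heavy_tx keeps about 0.1%
-- of the rows instead of the default 1%:
ASSOCIATE STATISTICS WITH FUNCTIONS f_heavy_tx
    DEFAULT SELECTIVITY 0.1;

-- Remove the association and fall back to the defaults:
DISASSOCIATE STATISTICS FROM FUNCTIONS f_heavy_tx;
```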
7. 7 of 44
Impact
SQL> set autotrace on explain
SQL> SELECT count(*) FROM emp
2 WHERE f_heavy_tx(deptno) = 'A' OR f_light_tx(empno) = 'B';
COUNT(*)
----------
0
Execution Plan
-----------------------------------------------------
| Id | Operation | Name | Rows | Bytes |
---------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 |
| 1 | SORT AGGREGATE | | 1 | 7 |
|* 2 | TABLE ACCESS FULL| EMP | 1 | 7 |
---------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("F_LIGHT_TX"("EMPNO")='B' OR "F_HEAVY_TX"("DEPTNO")='A')
Change of execution order
8. 8 of 44
Increasing Complexity
Problem:
Hardcoding high values is a cheat
… although, in some cases it may be enough.
Solution:
Get actual statistics
… if you can simulate a real-world environment.
9. 9 of 44
Measuring Statistics – Sample Function
CREATE FUNCTION f_getdeptinfo_tx (i_deptno NUMBER)
RETURN VARCHAR2 IS
v_out_tx VARCHAR2(256);
BEGIN
SELECT dname
INTO v_out_tx
FROM scott.dept@remoteDB
WHERE deptno = i_deptno;
SELECT v_out_tx||':'||count(*)
INTO v_out_tx
FROM scott.emp
WHERE deptno = i_deptno;
RETURN v_out_tx;
END;
Remote call
10. 10 of 44
Measuring Statistics - Snapshot Before
SQL> SELECT f_getdeptinfo_tx (10) FROM DUAL;
SQL> SELECT a.name, b.value
2 FROM v$statname a, v$mystat b
3 WHERE a.statistic# = b.statistic#
4 AND name IN ('db block gets', -- current-mode logical reads
5 'consistent gets', -- consistent-mode logical reads
6 'CPU used by this session', -- CPU
7 'bytes sent via SQL*Net to dblink', -- DB-link
8 'bytes received via SQL*Net from dblink' -- DB-link
9 );
NAME VALUE
-------------------------------------- ----------
CPU used by this session 9
db block gets 12
consistent gets 226
bytes sent via SQL*Net to dblink 3459
bytes received via SQL*Net from dblink 4070
Cause parse and ignore
11. 11 of 44
Measuring Statistics - Snapshot After
SQL> SELECT f_getdeptinfo_tx (10) FROM DUAL;
SQL> SELECT a.name, b.value
2 FROM v$statname a, v$mystat b
3 WHERE a.statistic# = b.statistic#
4 AND name IN ('db block gets', -- current-mode gets (logical I/O)
5 'consistent gets', -- logical reads
6 'CPU used by this session', -- CPU
7 'bytes sent via SQL*Net to dblink', -- DB-link
8 'bytes received via SQL*Net from dblink' -- DB-link
9 );
NAME VALUE
-------------------------------------- ----------
CPU used by this session 11 [was 9]
db block gets 12 [was 12]
consistent gets 232 [was 226]
bytes sent via SQL*Net to dblink 4113 [was 3459]
bytes received via SQL*Net from dblink 4603 [was 4070]
Real call
Real Numbers
Difference:
CPU time = 2 centiseconds (0.02 sec)
I/O = 6 blocks
db block gets (current mode) = 0 blocks
consistent gets (logical reads) = 6 blocks
Network = 2 blocks
Sent via DBLink = 654 bytes ~ 1 block
Received via DBLink = 533 bytes ~ 1 block
Needs translation
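The before/after arithmetic can also be scripted instead of computed by hand. A minimal sketch (the variable names and helper structure are mine, not from the original slides; it assumes SELECT privileges on v$statname and v$mystat, and that the function is already parsed):

DECLARE
  TYPE stat_aat IS TABLE OF NUMBER INDEX BY VARCHAR2(64);
  v_before stat_aat;
  v_out_tx VARCHAR2(256);
BEGIN
  -- snapshot all session statistics before the call
  FOR r IN (SELECT a.name, b.value
              FROM v$statname a, v$mystat b
             WHERE a.statistic# = b.statistic#) LOOP
    v_before(r.name) := r.value;
  END LOOP;
  -- the call being measured
  v_out_tx := f_getdeptinfo_tx(10);
  -- print only the deltas of interest
  FOR r IN (SELECT a.name, b.value
              FROM v$statname a, v$mystat b
             WHERE a.statistic# = b.statistic#
               AND a.name IN ('CPU used by this session',
                              'db block gets',
                              'consistent gets',
                              'bytes sent via SQL*Net to dblink',
                              'bytes received via SQL*Net from dblink')) LOOP
    DBMS_OUTPUT.PUT_LINE(r.name||': '||(r.value - v_before(r.name)));
  END LOOP;
END;
/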
CPU Time Format Conversion
Convert CPU time into CPU instructions:
SQL> DECLARE
2 v_units_nr NUMBER;
3 v_time_nr NUMBER:=0.02; -- time in seconds
4 BEGIN
5 v_units_nr:=
6 DBMS_ODCI.ESTIMATE_CPU_UNITS (v_time_nr)* 1000;
7 DBMS_OUTPUT.PUT_LINE
8 ('Instructions:'||round(v_units_nr));
9 END;
10 /
Instructions:18783086
Function output is in
thousands of instructions
Final Step
Associate real statistics:
ASSOCIATE STATISTICS WITH FUNCTIONS f_getDeptInfo_tx
DEFAULT COST (
18783086, -- CPU instructions
6, -- local IO
2); -- network
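Once associated, the numbers the optimizer will see can be checked in the data dictionary. A quick sanity check (USER_ASSOCIATIONS is the documented dictionary view for statistics associations; column names per the Oracle reference):

SQL> SELECT object_name, def_cpu_cost, def_io_cost, def_net_cost
  2  FROM user_associations
  3  WHERE object_name = 'F_GETDEPTINFO_TX';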
Problem
Task:
Multiple functions in the same package.
Need to associate different statistics with each of them.
Problem:
No syntax to hard code statistics using the
PACKAGE.FUNCTION format.
… but you can hardcode statistics for the whole package (i.e. all of its
functions will share the same numbers)
… via: ASSOCIATE STATISTICS WITH PACKAGES <name>
Solution:
ODCI object type interface!
Sample Package
CREATE PACKAGE perf_pkg IS
FUNCTION f_heavy_tx (i_deptno NUMBER) RETURN VARCHAR2;
FUNCTION f_light_tx (i_empno NUMBER) RETURN VARCHAR2;
FUNCTION f_medium_tx (i_name VARCHAR2) RETURN VARCHAR2;
END;
CREATE OR REPLACE PACKAGE BODY perf_pkg is
FUNCTION f_heavy_tx (i_deptno NUMBER) RETURN VARCHAR2 IS
BEGIN RETURN 'heavy:'||i_deptno; END;
FUNCTION f_light_tx (i_empno NUMBER) RETURN VARCHAR2 IS
BEGIN RETURN 'light:'||i_empno; END;
FUNCTION f_medium_tx (i_name VARCHAR2) RETURN VARCHAR2 IS
BEGIN RETURN initcap(i_name); END;
END;
Key Discovery
ODCI interface:
Does not care about the names of function parameters, but does
care about their datatypes:
You need to cover every distinct combination of input datatypes.
In this case:
2 functions with NUMBER inputs
1 function with VARCHAR2 input
Object Type (1)
CREATE OR REPLACE TYPE function_stat_oty AS OBJECT (
dummy_attribute NUMBER,
STATIC FUNCTION ODCIGetInterfaces (p_interfaces OUT sys.odciobjectlist)
RETURN NUMBER,
STATIC FUNCTION ODCIStatsFunctionCost
(p_func_info IN sys.odcifuncinfo,
p_cost OUT sys.odcicost,
p_args IN sys.odciargdesclist,
i_single_nr IN NUMBER,
p_env IN sys.odcienv) RETURN NUMBER,
STATIC FUNCTION ODCIStatsFunctionCost
(p_func_info IN sys.odcifuncinfo,
p_cost OUT sys.odcicost,
p_args IN sys.odciargdesclist,
i_single_tx IN varchar2,
p_env IN sys.odcienv) RETURN NUMBER
);
One function for each
datatype permutation
Object Type (2)
CREATE OR REPLACE TYPE BODY function_stat_oty as
STATIC FUNCTION ODCIGetInterfaces
(p_interfaces OUT sys.odciobjectlist)
RETURN NUMBER IS
BEGIN
p_interfaces := sys.odciobjectlist(
sys.odciobject ('sys', 'odcistats2')
);
RETURN odciconst.success;
END odcigetinterfaces;
Required by ODCI
Object Type (3)
STATIC FUNCTION ODCIStatsFunctionCost
(p_func_info IN sys.odcifuncinfo,
p_cost OUT sys.odcicost,
p_args IN sys.odciargdesclist,
i_single_nr IN NUMBER,
p_env IN sys.odcienv
) RETURN NUMBER IS
BEGIN
IF LOWER(p_func_info.methodname) LIKE '%heavy%' THEN
p_cost := sys.odcicost
(cpucost=>NULL,
iocost=>1000,
networkcost=>NULL,
indexcostinfo=>NULL);
END IF;
RETURN odciconst.success;
END;
Record type containing:
- ObjectSchema
- ObjectName – name of package
or standalone function
- MethodName – name of
packaged function
- Flags
Object Type (4)
STATIC FUNCTION ODCIStatsFunctionCost
(p_func_info IN sys.odcifuncinfo,
p_cost OUT sys.odcicost,
p_args IN sys.odciargdesclist,
i_single_tx IN VARCHAR2,
p_env IN sys.odcienv
) RETURN NUMBER IS
BEGIN
IF LOWER(p_func_info.methodname) LIKE '%medium%' THEN
p_cost := sys.odcicost(NULL, 10, NULL, NULL);
END IF;
RETURN odciconst.success;
END;
END;
Second permutation
Test Case
SQL> ASSOCIATE STATISTICS WITH PACKAGES perf_pkg
2 USING function_stat_oty;
SQL> SET AUTOTRACE ON EXPLAIN
SQL> SELECT count(*) FROM emp
2 WHERE perf_pkg.f_heavy_tx(empno)='A'
3 OR perf_pkg.f_light_tx(deptno)='B'
4 OR perf_pkg.f_medium_tx(job)='C';
COUNT(*)
----------
0
Starting from HEAVY
Impact
Execution Plan
----------------------------------------------------------
Plan hash value: 2083865914
----------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 13863 (0)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS FULL| EMP | 1 | 15 | 13863 (0)|
----------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("PERF_PKG"."F_LIGHT_TX"("DEPTNO")='B' OR
"PERF_PKG"."F_MEDIUM_TX"("JOB")='C' OR
"PERF_PKG"."F_HEAVY_TX"("EMPNO")='A')
… but CBO started
from LIGHT!
Issue
Task:
Use Object Collections as a part of a SQL statement with the
TABLE clause.
Problem:
Oracle’s default cardinality of the collection causes the CBO
to make bad decisions.
Test Case
-- create table
CREATE TABLE inlist_tab AS
SELECT object_id, created, object_type
FROM all_objects
WHERE object_id IS NOT NULL;
ALTER TABLE inlist_tab
ADD CONSTRAINT inlist_tab_pk PRIMARY KEY (object_id) USING INDEX;
BEGIN
dbms_stats.gather_table_stats(user,'INLIST_TAB');
END;
-- create object collection
CREATE TYPE id_tt IS TABLE OF NUMBER;
Problem Illustration
SELECT /*+ gather_plan_statistics */ MAX(created)
FROM inlist_tab
WHERE object_id IN (SELECT t.column_value
FROM TABLE(id_tt(100,101)) t)
-- run DBMS_XPLAN.DISPLAY_CURSOR
-----------------------------------------------------------------------
|Id | Operation |Name |E-Rows|A-Rows
-----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 1
| 1 | SORT AGGREGATE | | 1| 1
|*2 | HASH JOIN | | 8168| 2
| 3 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168| 2
| 4 | TABLE ACCESS FULL |INLIST_TAB| 29885| 89761
-----------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_ID"=VALUE(KOKBF$))
Only 2 objects…
… cause full-table scan!
Possible Options
Hints:
CARDINALITY hint – manual cardinality override
Pro: Simple
Con: Hardcoded
DYNAMIC_SAMPLING – let Oracle check the data
Pro: Avoid hard coding
Con: Involves extra SQL overhead, while PL/SQL already knows
how many objects are in the collection
Impact of Hints
SELECT /*+ gather_plan_statistics */ MAX(created)
FROM inlist_tab
WHERE object_id IN (
SELECT /*+ dynamic_sampling (t 2) */ t.column_value
-- SELECT /*+ cardinality (t 2) */ t.column_value
FROM TABLE(id_tt(227011,227415)) t)
--------------------------------------------------------------------------
|Id|Operation |Name |E-Rows |A-Rows
--------------------------------------------------------------------------
| 0|SELECT STATEMENT | | | 1
| 1| SORT AGGREGATE | | 1 | 1
| 2| NESTED LOOPS | | | 2
| 3| NESTED LOOPS | | 2 | 2
| 4| COLLECTION ITERATOR CONSTRUCTOR FETCH| | 2 | 2
|*5| INDEX UNIQUE SCAN |INLIST_TAB_PK| 1 | 2
| 6| TABLE ACCESS BY INDEX ROWID |INLIST_TAB | 1 | 2
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("OBJECT_ID"=VALUE(KOKBF$))
Using index!
+ Extra Option - ODCI
ODCI interface:
ODCIStatsTableFunction method
It can work only with functions
… so if you need to use a default constructor, you must create a
"transmitter" function that has the statistics associated with it:
CREATE OR REPLACE FUNCTION MyCard(i_tt id_tt)
RETURN id_tt IS
BEGIN
RETURN i_tt;
END;
Object Type (1)
CREATE TYPE MyCard_OT AS OBJECT (
dummy_attribute NUMBER,
STATIC FUNCTION ODCIGetInterfaces
(p_interfaces OUT SYS.ODCIObjectList)
RETURN NUMBER,
STATIC FUNCTION ODCIStatsTableFunction (
p_function IN SYS.ODCIFuncInfo,
p_stats OUT SYS.ODCITabFuncStats,
p_args IN SYS.ODCIArgDescList,
i_tt IN id_tt)
RETURN NUMBER
);
Object collection as input
Object Type (2)
CREATE TYPE BODY MyCard_OT AS
STATIC FUNCTION ODCIGetInterfaces ...
STATIC FUNCTION ODCIStatsTableFunction
(p_function IN SYS.ODCIFuncInfo,
p_stats OUT SYS.ODCITabFuncStats,
p_args IN SYS.ODCIArgDescList,
i_tt IN id_tt) RETURN NUMBER IS
BEGIN
p_stats := SYS.ODCITabFuncStats(i_tt.COUNT);
RETURN ODCIConst.success;
END ODCIStatsTableFunction;
END;
Set statistics
Impact of Statistics
ASSOCIATE STATISTICS WITH FUNCTIONS MyCard USING mycard_ot;
SELECT /*+ gather_plan_statistics*/ MAX(created)
FROM inlist_tab
WHERE object_id IN (SELECT t.column_value
FROM table(MyCard(id_tt(100,101))) t)
---------------------------------------------------------------------
|Id|Operation |Name |E-Rows|A-Rows
---------------------------------------------------------------------
| 0|SELECT STATEMENT | | | 1
| 1| SORT AGGREGATE | | 1| 1
| 2| NESTED LOOPS | | | 2
| 3| NESTED LOOPS | | 2| 2
| 4| COLLECTION ITERATOR PICKLER FETCH|MYCARD | 2| 2
|*5| INDEX UNIQUE SCAN |INLIST_TAB_PK| 1| 2
| 6| TABLE ACCESS BY INDEX ROWID |INLIST_TAB | 1| 2
---------------------------------------------------------------------
Predicate Information (identified by operation id):
--------------------------------------------------
5 - access("OBJECT_ID"=VALUE(KOKBF$))
Valid stats
Constructor is wrapped
Issue
Default:
1% ~ if you compare a function to a literal, the CBO assumes only
every 100th row satisfies the condition.
Problem:
If your function's output is heavily skewed, the CBO's predicate
analysis will generate bad execution plans.
Solutions:
ODCI object method can adjust selectivity setting.
You can also hard code selectivity (if data is static):
ASSOCIATE STATISTICS WITH FUNCTIONS f_isSenior_yn
DEFAULT SELECTIVITY 50;
Test Case
CREATE OR REPLACE FUNCTION f_isSenior_yn (i_job_id VARCHAR2)
RETURN VARCHAR2 IS
BEGIN
IF i_job_id IN ('AD_PRES','AD_VP') THEN
RETURN 'Y';
ELSE
RETURN 'N';
END IF;
END;
SQL> select f_isSenior_yn(job_id) isSenior_yn, count(*)
2 from hr.employees
3 group by f_isSenior_yn(job_id);
ISSENIOR_YN COUNT(*)
----------- ----------
Y 3
N 104
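The counts above translate directly into the selectivity percentages the optimizer needs. One way to compute them in a single query (my own query, not from the original slides; it assumes the same HR schema):

SQL> SELECT f_isSenior_yn(job_id) isSenior_yn,
  2         ROUND(100 * COUNT(*) / SUM(COUNT(*)) OVER (), 0) pct
  3  FROM hr.employees
  4  GROUP BY f_isSenior_yn(job_id);

This returns roughly 3 for 'Y' and 97 for 'N' (3 and 104 out of 107 rows), the same numbers used by ODCIStatsSelectivity later.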
Problem Illustration
SELECT /*+ gather_plan_statistics*/
e.*,
d.department_name
FROM hr.employees e,
hr.departments d
WHERE e.department_id = d.department_id
AND f_isSenior_yn(e.job_id)='N'
--------------------------------------------------------------------
|Id|Operation |Name |E-Rows|A-Rows|Buffers|Used-Mem |
--------------------------------------------------------------------
| 0|SELECT STATEMENT | | | 103| 14| |
|*1| HASH JOIN | | 1| 103| 14| 901K(0)|
|*2| TABLE ACCESS FULL|EMPLOYEES | 1| 104| 7| |
| 3| TABLE ACCESS FULL|DEPARTMENTS| 1| 27| 7| |
--------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("EMPLOYEES"."DEPARTMENT_ID"="DEPARTMENTS"."DEPARTMENT_ID")
2 - filter("F_ISSENIOR_YN"("EMPLOYEES"."JOB_ID")='N')
1% of 107 is way off!
Heavy memory usage
Object Type (1)
CREATE TYPE MySelect_OT AS OBJECT (
dummy_attribute NUMBER,
STATIC FUNCTION ODCIGetInterfaces
(p_interfaces OUT SYS.ODCIObjectList) RETURN NUMBER,
STATIC FUNCTION ODCIStatsSelectivity (
p_pred_info IN SYS.ODCIPredInfo,
p_selectivity OUT NUMBER,
p_args IN SYS.ODCIArgDescList,
p_start IN VARCHAR2,
p_stop IN VARCHAR2,
i_job IN VARCHAR2,
p_env IN SYS.ODCIEnv)
RETURN NUMBER
);
Object Type (2)
CREATE OR REPLACE TYPE BODY MySelect_OT AS
STATIC FUNCTION ODCIGetInterfaces …;
STATIC FUNCTION ODCIStatsSelectivity (
p_pred_info IN SYS.ODCIPredInfo,
p_selectivity OUT NUMBER,
p_args IN SYS.ODCIArgDescList,
p_start IN VARCHAR2,
p_stop IN VARCHAR2,
i_job IN VARCHAR2,
p_env IN SYS.ODCIEnv
) RETURN NUMBER IS
BEGIN
IF p_start = 'Y' THEN
p_selectivity := 3;
ELSE
p_selectivity := 97;
END IF;
RETURN ODCIConst.success;
END ODCIStatsSelectivity;
END;
START is used
for '=' comparison
START and STOP are used
for 'BETWEEN' comparison
Impact of Statistics
ASSOCIATE STATISTICS WITH FUNCTIONS f_isSenior_yn USING MySelect_OT;
SELECT /*+ gather_plan_statistics*/ e.*,
d.department_name
FROM hr.employees e,
hr.departments d
WHERE e.department_id = d.department_id
AND f_isSenior_yn(e.job_id)='N'
-----------------------------------------------------------------------------
|Id|Operation |Name |E-Rows|A-Rows|Buffers|Used-Mem|
-----------------------------------------------------------------------------
| 0|SELECT STATEMENT | | | 103| 9| |
| 1| MERGE JOIN | | 103| 103| 9| |
| 2| TABLE ACCESS BY INDEX ROWID|DEPARTMENTS| 27| 27| 2| |
| 3| INDEX FULL SCAN |DEPT_ID_PK | 27| 27| 1| |
|*4| SORT JOIN | | 104| 103| 7|14336(0)|
|*5| TABLE ACCESS FULL |EMPLOYEES | 104| 104| 7| |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("EMPLOYEES"."DEPARTMENT_ID"="DEPARTMENTS"."DEPARTMENT_ID")
filter("EMPLOYEES"."DEPARTMENT_ID"="DEPARTMENTS"."DEPARTMENT_ID")
5 - filter("F_ISSENIOR_YN"("EMPLOYEES"."JOB_ID")='N')
Different execution plan!
Much less memory
Summary
Keeping Oracle statistics up to date is important
… otherwise the CBO can get confused.
Manual management of PL/SQL function statistics is non-trivial
… so it should be used only when needed.
Using PL/SQL functions within SQL should be tightly controlled
… because usually statistics are the least of your problems!
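One housekeeping point worth adding: associations are persistent, so when a function's cost or selectivity profile changes, the old statistics should be dropped rather than left to mislead the CBO (syntax per the Oracle SQL reference):

SQL> DISASSOCIATE STATISTICS FROM FUNCTIONS f_getDeptInfo_tx;
SQL> DISASSOCIATE STATISTICS FROM PACKAGES perf_pkg;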
Contact Information
Michael Rosenblum – mrosenblum@dulcian.com
Dulcian, Inc. website - www.dulcian.com
Blog: wonderingmisha.blogspot.com
Available NOW:
Oracle PL/SQL Performance Tuning Tips & Techniques
Save the Date
COLLABORATE 17 registration will open on Thursday, October 27.
Call for Speakers
Submit your session presentation! The Call for Speakers is open until Friday,
October 7.
collaborate.ioug.org