This document contains information about Patrick Barel including his contact details, work experience, blog and website links. It discusses database topics like data integrity, flashback queries, and temporal validity. Examples are provided to demonstrate how to ensure data integrity through constraints, use flashback queries to view past data states, and implement temporal tables to track valid time periods for rows.
Analytic SQL functions, or "window functions", have been around since 8.1.6, but they are still dramatically underused by application developers. This session looks at the syntax and usage of analytic functions, and how they can supercharge your SQL skillset.
Covers analytics from their inception in 8.1.6 all the way through to enhancements in 18 and 19.
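The analytic functions described above can be sketched with SQLite standing in for Oracle (SQLite gained window-function support in 3.25, bundled with any recent Python). The `emp` schema and values below are illustrative, loosely mirroring Oracle's classic demo tables:

```python
import sqlite3

# Minimal sketch of analytic (window) functions; requires SQLite 3.25+.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (ename TEXT, deptno INTEGER, sal INTEGER);
    INSERT INTO emp VALUES
        ('KING', 10, 5000), ('CLARK', 10, 2450),
        ('SCOTT', 20, 3000), ('SMITH', 20, 800);
""")

# RANK() orders salaries within each department; SUM() OVER computes a
# per-department total without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT ename, deptno, sal,
           RANK() OVER (PARTITION BY deptno ORDER BY sal DESC) AS sal_rank,
           SUM(sal) OVER (PARTITION BY deptno) AS dept_total
    FROM emp
    ORDER BY deptno, sal_rank
""").fetchall()
print(rows)
```

Each input row survives into the output, annotated with its rank and its department's total salary.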
This document discusses techniques for joining data from multiple database tables, including:
- Performing equijoins to retrieve matching records between tables based on equality conditions.
- Using outer joins to also return rows with no matches between tables.
- Self joins to join a table to itself based on relationships between records in the same table.
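The three join styles listed above can be sketched in SQLite (as a stand-in for Oracle); the `emp`/`dept` schema and the `mgr` column are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT);
    CREATE TABLE emp  (empno INTEGER PRIMARY KEY, ename TEXT,
                       deptno INTEGER, mgr INTEGER);
    INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH'), (40, 'OPS');
    INSERT INTO emp  VALUES (1, 'KING', 10, NULL), (2, 'SMITH', 20, 1);
""")

# Equijoin: only rows whose deptno matches on both sides.
equi = conn.execute("""
    SELECT e.ename, d.dname FROM emp e JOIN dept d ON e.deptno = d.deptno
""").fetchall()

# Outer join: OPS appears even though no employee matches it.
outer = conn.execute("""
    SELECT d.dname, e.ename FROM dept d LEFT JOIN emp e ON e.deptno = d.deptno
""").fetchall()

# Self join: relate each employee to their manager within the same table.
self_join = conn.execute("""
    SELECT e.ename, m.ename AS manager
    FROM emp e JOIN emp m ON e.mgr = m.empno
""").fetchall()
print(equi, outer, self_join)
```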
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
This document discusses various SQL join queries using the EMP and DEPT tables in the Oracle database. It provides examples of inner joins, outer joins, natural joins, cross joins, and lateral joins. It explores different join types and syntax as well as filtering criteria and partitioning.
Slides from the ITOUG events in Rome and Milan 2020.
Most people think of the Flashback features in Oracle as the "In Case of Emergency" switch, to be used only when some catastrophe has occurred on your database. And it is true that Flashback will definitely help you three seconds after you press the Commit button and realise that your "delete all rows from the SALES table" statement probably needed a WHERE clause, or when you run "drop table" on the Production database when you were just so sure you were logged onto the Test system. But Flashback is not only for those "Oh No!" moments. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career, and they will open up a myriad of other benefits as well.
Another year goes by, and most likely, another data access framework has been invented. It will claim to be the fastest, smartest way to talk to the database, and just like all those that came before it, it will not be. Because the best database access tool has been there for more than 30 years now, and that is PL/SQL. Although we all sometimes fall prey to the mindset of "Oh look, a shiny new tool, we should start using it", the performance and simplicity of PL/SQL remain unmatched. This session looks at the failings of other data access languages, why even a cursory knowledge of PL/SQL will make you a better developer, and how to get the most out of PL/SQL when it comes to database performance.
Sangam 19 - Successful Applications on Autonomous, by Connor McDonald
The autonomous database offers insane levels of performance, but you won't be able to attain it if you are not constructing your SQL statements in a way that is scalable... and, more importantly, secure from hacking.
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
This document discusses techniques for joining data from multiple database tables. It covers equijoins to retrieve matching records between tables, non-equijoins to retrieve records with non-equal conditions, outer joins to include non-matching records, and self joins to join a table to itself. Examples are provided to demonstrate how to write SQL statements to perform each type of join and include additional conditions.
This document discusses data manipulation in Oracle databases. It describes how to insert, update, and delete rows from tables using DML statements like INSERT, UPDATE, and DELETE. It also covers transaction management using COMMIT, ROLLBACK, and SAVEPOINT statements to control when changes are committed to the database or rolled back. Key aspects covered include inserting new rows, updating and deleting specific rows or all rows, and handling integrity constraints and transactions.
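The DML and transaction-control statements above can be sketched with Python's sqlite3 module standing in for Oracle; SAVEPOINT/ROLLBACK behave similarly in both databases, though the schema here is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None   # autocommit mode: we issue BEGIN/COMMIT ourselves
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO accounts VALUES (1, 100)")
cur.execute("SAVEPOINT before_update")
cur.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
# Suppose a business rule forbids negative balances: undo just the UPDATE.
cur.execute("ROLLBACK TO before_update")
cur.execute("COMMIT")         # the INSERT survives; the UPDATE does not

balance = cur.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)
```

ROLLBACK TO a savepoint undoes only the work after the savepoint, so the committed row keeps its original balance.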
This document discusses how to restrict and sort data retrieved by SQL queries. It covers limiting rows using the WHERE clause and comparison operators, and sorting rows using the ORDER BY clause. Examples are provided for each clause and operator to filter rows by conditions and sort the output in ascending or descending order on one or multiple columns. Parentheses can be used to override the default order of operation precedence.
This document discusses different types of joins that can be used to retrieve data from multiple database tables. It describes equijoins, non-equijoins, outer joins, and self joins. Key points include:
- Joins allow querying data from more than one table by specifying a join condition in the WHERE clause relating column values across tables.
- Equijoins use equality (=) conditions to join tables, while non-equijoins use operators such as BETWEEN.
- Outer joins retrieve all rows from one or both tables, including those with no join match.
- Self joins match a table to itself to relate records via non-primary key columns, such as relating employees to their managers.
This document discusses five things that have improved SQL capabilities, including analytics and pipelined functions. It provides examples of using analytics for tasks like running totals, percentages, rankings, and moving averages. Pipelined functions allow SQL to treat table functions like regular database tables, improving performance of ETL processes. The document also discusses how pipelined functions help with cardinality estimates by associating statistics types.
Comparative Genomics with GMOD and BioPerl, by Jason Stajich
BioPerl is an open source toolkit for bioinformatics data manipulation written in Perl. It contains modules for reading and writing sequence data in common formats, manipulating sequences, parsing BLAST reports and multiple sequence alignments. BioPerl objects represent sequences, features, annotations and search results in a flexible and extensible way. The toolkit is widely used for tasks like sequence analysis, parsing bioinformatics software output, and accessing biological databases.
Group functions operate on sets of rows to give one result per group. Some key points covered in the document include:
1. Group functions like AVG, COUNT, MAX, MIN, SUM allow aggregating data across rows grouped by columns like department.
2. The GROUP BY clause is used to divide rows into groups and column(s) in the SELECT that are not aggregate functions must be in the GROUP BY.
3. The HAVING clause is used to filter groups based on conditions with aggregate functions and comes after GROUP BY.
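The three points above can be sketched in SQLite (standing in for Oracle); the `emp` data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (ename TEXT, deptno INTEGER, sal INTEGER);
    INSERT INTO emp VALUES ('KING',10,5000), ('CLARK',10,2450),
                           ('SCOTT',20,3000), ('SMITH',20,800), ('ADAMS',20,1100);
""")

# One result row per group; HAVING filters after aggregation, so it can
# reference AVG(sal), which a WHERE clause cannot do.
rows = conn.execute("""
    SELECT deptno, COUNT(*) AS headcount, AVG(sal) AS avg_sal
    FROM emp
    GROUP BY deptno
    HAVING AVG(sal) > 2000
    ORDER BY deptno
""").fetchall()
print(rows)
```

Department 20's average (about 1633) fails the HAVING condition, so only department 10 appears.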
Sangam 18 - Great Applications with Great SQL, by Connor McDonald
Get more out of the database by exploiting the rich suite of features available with the SQL language. We cover pivot, unpivot, rollup, pagination, error logging, and query block naming.
1. The document describes adding new redo log groups to an Oracle RAC database with two nodes. It checks the current log files, adds new log file groups of size 100MB to threads 1 and 2 using ALTER DATABASE commands, and verifies the new log files were added.
2. The document then logs into each node and runs ALTER SYSTEM SWITCH LOGFILE commands to switch the redo threads and make the original 50MB files inactive.
3. Finally, it drops the inactive log file groups from each thread using ALTER DATABASE DROP LOGFILE commands.
The document discusses various ways to concatenate or aggregate column values in Oracle databases. Older methods like XMLAGG, CONNECT BY, and custom aggregate functions are compared to the simpler LISTAGG function available in Oracle 11g and higher. Upgrading to newer database versions brings improved developer productivity through easier string aggregation queries.
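As a hedged illustration of string aggregation: SQLite has no LISTAGG, but its `group_concat` is the closest analog, so the sketch below only demonstrates the idea; the Oracle 11g+ form would be `LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename)`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (ename TEXT, deptno INTEGER);
    INSERT INTO emp VALUES ('KING',10), ('CLARK',10), ('SMITH',20);
""")

# group_concat collapses each department's names into one delimited string.
# Note: unlike LISTAGG's WITHIN GROUP, group_concat's ordering is unspecified.
rows = conn.execute("""
    SELECT deptno, group_concat(ename, ',') AS names
    FROM emp GROUP BY deptno ORDER BY deptno
""").fetchall()
print(rows)
```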
This document discusses relational databases and the Oracle implementation. It introduces relational database concepts like tables, relations, keys and SQL. It describes how Oracle uses SQL and the PL/SQL programming language to interact with and manage data in a relational database management system. PL/SQL allows embedding SQL statements in procedural code blocks for data manipulation and queries.
The document provides an overview of basic SQL statements and capabilities. It discusses using SELECT statements to project columns, select rows, and join tables. It also covers arithmetic expressions, column aliases, concatenation, and eliminating duplicate rows. SQL statements are executed through the SQL*Plus environment, which allows editing, saving, and running SQL code and commands.
This document discusses views in Oracle databases. It defines a view as a virtual table derived from one or more underlying base tables or other views. The document covers how to create simple and complex views, modify view definitions, perform DML operations on views, and remove views. Key points include that views allow restricting access, simplifying queries, and presenting different perspectives of the same data without affecting the base tables.
Latin America Tour 2019 - 10 great sql features, by Connor McDonald
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
Subqueries allow queries to be nested within other queries. They can return either single rows or multiple rows. Single-row subqueries use comparison operators like = or >, while multiple-row subqueries use operators like IN, ANY, or ALL. Subqueries execute first before the outer or main query uses the results. This allows the main query to filter or compare based on unknown values from the subquery.
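The single-row versus multiple-row distinction above can be sketched in SQLite (standing in for Oracle); the `emp` data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (ename TEXT, deptno INTEGER, sal INTEGER);
    INSERT INTO emp VALUES ('KING',10,5000), ('CLARK',10,2450),
                           ('SCOTT',20,3000), ('SMITH',20,800);
""")

# Single-row subquery: a comparison operator against the one value returned.
above_avg = conn.execute("""
    SELECT ename FROM emp
    WHERE sal > (SELECT AVG(sal) FROM emp)
    ORDER BY ename
""").fetchall()

# Multiple-row subquery: IN matches against every value the inner query returns.
in_rich_depts = conn.execute("""
    SELECT ename FROM emp
    WHERE deptno IN (SELECT deptno FROM emp WHERE sal > 4000)
    ORDER BY ename
""").fetchall()
print(above_avg, in_rich_depts)
```

The inner queries run first; the outer query then filters on values it could not have known in advance.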
This document discusses how to limit and sort data retrieved from a database table using SQL queries. It covers using the WHERE clause to restrict rows by conditions, comparison operators like = and BETWEEN, logical operators like AND and OR, and the ORDER BY clause to sort rows in ascending or descending order based on one or more columns. Examples are provided for each technique.
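A compact sketch of those clauses together, using SQLite with illustrative data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (ename TEXT, sal INTEGER, deptno INTEGER);
    INSERT INTO emp VALUES ('KING',5000,10), ('CLARK',2450,10),
                           ('SCOTT',3000,20), ('SMITH',800,20);
""")

# BETWEEN is inclusive on both ends; AND narrows and OR widens the filter;
# ORDER BY ... DESC sorts high-to-low, with ename as a tie-breaker.
rows = conn.execute("""
    SELECT ename, sal FROM emp
    WHERE sal BETWEEN 1000 AND 4000 AND (deptno = 10 OR deptno = 20)
    ORDER BY sal DESC, ename
""").fetchall()
print(rows)
```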
Civic crowdfunding, societal crowdfunding, for the Democratic Challenge. The pivots we worked through over the past year, and the new assumptions we have made to further shape the policy as a start-up.
This document analyzes data from 185 successfully funded video game projects on Kickstarter before December 2012. It finds that 27 projects raised over $250,000 each, totaling $29 million, while 78 projects raised between $10,000-$24,000. Adventure, RPG, and strategy games were most popular. Since September 2012, the average amount raised increased to $196,000 from $153,000, and more projects raised over $50,000, indicating Kickstarter's continued momentum for video game funding.
This paper analyzes the development of 2D and 3D cadastral registration systems in Malaysia based on the LADM, integrating the land office and mapping agency databases using a Unique Parcel Identifier. The research method takes 2D and 3D cadastral data from those two agencies and analyzes it against the LADM. The result is a spatial and non-spatial data model that can integrate the registration systems.
The document discusses the importance of effective listening in healthcare. It identifies three types of listening: coordinated care between providers, collaborated care between providers and patients/caregivers, and committed care throughout the entire healthcare system. It provides dos and don'ts for creating a listening environment, such as knowing the patient, allowing time with providers, and establishing connections, versus restricting time or discounting community input. The ideas are drawn from the book "In Pain We Trust" about the journey from sickness to health.
ICO Media is a pan-European PR agency specializing in video games. They provide bespoke PR services for online games and indie games, reaching over 2,000 media contacts across 24 European countries. ICO works in-house to localize communications into multiple languages and provides monthly reporting with in-depth analysis of media coverage using proprietary tools. Case studies demonstrate successful partnerships with major online game developers like Riot Games and indie studios like Shiro Games.
Domino's Pizza is currently succeeding with its new strategy of revising its pizza recipe to improve quality and taste. The strategy has drawn in more buyers after the company previously received negative criticism of its products. Domino's Pizza ran a global campaign to repair its image by changing its recipe, cooking process, and pizza presentation.
Accelerating Our Path to Multi Platform Benefits, by IT@Intel
This is a time of tremendous change for IT organizations everywhere.
Intel IT realized we need to enable enterprise applications to support the devices of today (touch) and also develop the applications so they are ready for the next big thing (voice and gesture). We've kicked off a new initiative that focuses on accelerating delivery of applications to our business partners and employees on their mobile platform(s) of choice.
This document contains advice and motivational sayings encouraging the reader to stop complaining and crying over problems, and instead take action by initiating solutions and concentrating on goals. It emphasizes showing one's capabilities through performance rather than just talking, and facing challenges head on rather than blaming others or making excuses. The overall message is to be positive, unique, and a leader rather than a follower.
This document discusses how to build a strong personal brand. It recommends conducting a personal audit to understand your values, vision, goals, strengths and passions. It also suggests assessing if your current job aligns with your skills and personality. Key aspects of developing a strong personal brand include understanding what sets you apart from others, getting feedback on how people perceive you, developing your network on platforms like LinkedIn, crafting an elevator pitch, and being aware of how your manner, appearance, communication style and confidence level impact how others see your brand. The document provides a checklist of elements to consider when developing and promoting your personal brand identity.
Policy for reducing maternal and infant mortality (AKI/AKB), West Sulawesi Provincial Health Office, 27-11-2016, by Muh Saleh
The West Sulawesi Provincial Health Office's policy for reducing maternal and infant mortality covers improving maternal and child health services, building the capacity of health workers, and strengthening cooperation between midwives and traditional birth attendants to lower maternal and infant mortality in West Sulawesi.
ICS Security NON-STOP. Series 1. Architecture and main components of industrial control systems from the point of..., by УЦСБ
This introductory installment reviews basic concepts, controlled objects and complexes, control levels, the functions of industrial control systems (ICS), and the generalized structure of a modern ICS. Throughout, the emphasis is on nuances related to information security.
Webinar date: 22 October 2015.
The recording is available on the YouTube channel: https://youtu.be/3gFz7oc1ang
Speaker: Nikolay Domukhovsky.
Slides from OpenWorld 2013 presentation.
Analytics have been around since 8.1.6, but they are still dramatically underused by application developers. This session looks at the syntax and usage of analytic functions, and how they can supercharge your SQL skillset.
This document discusses restricting and sorting data in SQL queries. It describes how to limit rows returned using a WHERE clause and comparison operators. It also explains how to sort rows using an ORDER BY clause, including sorting in ascending and descending order and by multiple columns. Logical operators, precedence rules, and NULL handling are also covered.
This document discusses how to limit and sort data retrieved by SQL queries. It describes using the WHERE clause to restrict rows selected and comparison operators like =, <, and BETWEEN to define conditions. The ORDER BY clause allows sorting retrieved rows in ascending or descending order based on column values. Logical operators like AND, OR, and NOT can combine conditions.
The document discusses various SQL clauses and operators for restricting, sorting, and filtering data retrieved from database tables. It covers the WHERE clause for filtering rows, comparison operators like = and BETWEEN, logical operators like AND and OR, NULL handling, pattern matching with LIKE, and sorting order with ORDER BY. Examples are provided to demonstrate how each technique can be used in a SELECT statement to selectively retrieve only the needed data from database tables.
This document discusses restricting and sorting data in SQL queries. It covers limiting rows using the WHERE clause with various operators like comparison, logical, BETWEEN, IN, LIKE and IS NULL. It also covers sorting result rows using the ORDER BY clause, including sorting in ascending and descending order and by multiple columns. The overall goal is to select specific data rows and control the order of rows displayed in the results.
These slides are from OpenWorld 2018. We covered
- conventional CONNECT BY syntax in Oracle
- extensions that were introduced in more recent versions
- the recursive WITH clause
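The recursive WITH clause is not Oracle-specific, so the hierarchy walk the deck describes can be sketched in SQLite; the employee-to-manager schema below is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, mgr INTEGER);
    INSERT INTO emp VALUES (1,'KING',NULL), (2,'JONES',1), (3,'SCOTT',2);
""")

# The anchor member picks the root (no manager); the recursive member joins
# each level's rows back to emp to pick up their direct reports.
rows = conn.execute("""
    WITH RECURSIVE chain(empno, ename, lvl) AS (
        SELECT empno, ename, 1 FROM emp WHERE mgr IS NULL
        UNION ALL
        SELECT e.empno, e.ename, c.lvl + 1
        FROM emp e JOIN chain c ON e.mgr = c.empno
    )
    SELECT ename, lvl FROM chain ORDER BY lvl
""").fetchall()
print(rows)
```

In Oracle, the same traversal could be written with `CONNECT BY PRIOR empno = mgr` and the `LEVEL` pseudocolumn.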
This document discusses how to limit and sort data retrieved by SQL queries. It describes using the WHERE clause to restrict rows selected and comparison operators like =, <, and BETWEEN to define conditions. The ORDER BY clause allows sorting retrieved rows in ascending or descending order based on column values. Logical operators like AND, OR, and NOT can combine conditions.
The document appears to be a presentation on window functions in SQL. It provides examples of different types of window functions including ranking functions like RANK, DENSE_RANK, and ROW_NUMBER. It also shows aggregation functions used in window functions like SUM, COUNT, and AVG. Examples calculate ranking of employee hires and running totals of salaries by department. The presentation emphasizes the simple syntax of window functions and their use in analytic queries.
The document contains SQL commands that create tables, insert data, and perform queries on the tables. The tables created are studies, software, and programmer. Data is inserted and various queries are run to retrieve, aggregate, and analyze the data. Key information summarized includes:
- Tables were created to store student studies data, software project data, and programmer details.
- Data was inserted into the tables and various queries were run to retrieve, calculate statistics on, and analyze the data across the tables.
- Queries included finding averages, minimums, maximums, counts, sums, and using functions like trunc, round, and to_char to manipulate dates and strings.
OpenWorld 2018 - Common Application Developer Disasters, by Connor McDonald
Two of the critical requirements of a database are:
- running fast
- preserving data integrity
The database can deliver both, but only as long as you understand its mechanisms correctly. If you don't, then things can go downhill fast.
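The integrity half of that can be sketched by letting the database, rather than middle-tier code, reject bad data. SQLite stands in for Oracle here (note that SQLite needs foreign keys enabled per connection); the schema is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite-specific: off by default
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER PRIMARY KEY);
    CREATE TABLE emp  (empno INTEGER PRIMARY KEY,
                       sal INTEGER CHECK (sal >= 0),
                       deptno INTEGER REFERENCES dept(deptno));
    INSERT INTO dept VALUES (10);
""")

conn.execute("INSERT INTO emp VALUES (1, 1000, 10)")    # passes every constraint
rejected = []
for stmt in ("INSERT INTO emp VALUES (2, 1000, 99)",    # no such department
             "INSERT INTO emp VALUES (3, -5, 10)"):     # negative salary
    try:
        conn.execute(stmt)
    except sqlite3.IntegrityError as exc:
        rejected.append(str(exc))
print(rejected)
```

Both bad rows are refused by declarative constraints with no application code involved.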
This document discusses several features introduced in Oracle Database 12c and 18c that improve the handling of SQL and PL/SQL code. It covers longer identifier names, compile-time resolvable expression sizes, improved overflow handling for LISTAGG, column-level collation, and deprecating code. Examples are provided to demonstrate the usage and benefits of each feature.
Slides from the ILOUG 2019 conference. At every conference, you’ll see plenty about the latest and greatest release of Oracle. You’ll sigh as you think that it will be 10 years before you get to these versions at your workplace. This presentation is for you – we’ll look at the cool underused SQL features available going all the way back to Oracle 8.
حل اسئلة الكتاب السعودى فى شرح قواعد البيانات اوراكلMohamed Moustafa
The document contains examples of SQL queries on Oracle database tables. It shows how to retrieve data from the DEPT and EMP tables, including selecting columns, filtering rows, sorting results, and performing calculations. Various SQL functions are demonstrated like INITCAP, LENGTH, SUBSTR, and date formatting. The last section aggregates data from the EMP table using MAX, MIN, SUM, and AVG.
This document discusses features of Oracle Database 12c related to auditing and tracking changes over time. It summarizes that Oracle 12c includes flashback data archive, which allows viewing or restoring data to a previous state. This feature can be used for auditing and tracking changes made to database tables. The document also discusses how Oracle 12c captures additional context metadata with each change, including user, host, and program used, allowing more detailed tracking of changes than prior releases.
This document is the introduction to a SQL course for data professionals. It introduces SQL as a language for storing, manipulating, and accessing data in relational databases. The course covers basic SQL topics like SELECT statements, WHERE clauses, ORDER BY, and GROUP BY, as well as more advanced concepts like joins, subqueries, and regular expressions. It is aimed at beginners but also covers intermediate and advanced SQL.
The document discusses using cursors and triggers in SQL procedures. It describes how cursors allow defining and iterating through a result set row by row. Triggers allow running code automatically when data changes, such as on insert, update or delete operations. An example cursor program selects the top 5 paid employees from a table and inserts their data into another table. An example trigger inserts a record into a log table each time a row is updated in another table.
After completing this lesson, you should be able to do the following:
Describe a view
Create a view
Retrieve data through a view
Alter the definition of a view
Insert, update, and delete data through a view
Drop a view
2. 2
• Patrick Barel
• Working with Oracle since 1997
• Working with PL/SQL since 1999
• Playing with APEX since 2003 (mod_plsql)
• ACE since 2011
• OCA since December 20th 2012
• OCP since February 20th 2014
• Developer Choice Award 2015
About me…
12. 12
Data integrity/quality
insert into demo.emp_unprotected values (7369, 'SMITH', 800, 20);
insert into demo.emp_unprotected values (7499, 'ALLEN', 1600, 30);
insert into demo.emp_unprotected values (7521, 'WARD', 1250, 30);
insert into demo.emp_unprotected values (7566, 'JONES', 2975, 20);
…
insert into demo.emp_unprotected values (7900, 'JAMES', 950, 30);
insert into demo.emp_unprotected values (7902, 'FORD', 3000, 20);
insert into demo.emp_unprotected values (7934, 'MILLER', 1300, 10);
13. 13
Data integrity/quality
update demo.emp_unprotected e
set e.sal = 2 * e.sal
• Salary not over 7500
• No duplicate employee numbers
• Existing department
update demo.emp_unprotected e
set e.deptno = 50
where e.ename = 'SCOTT'
insert into demo.emp_unprotected
(empno, ename, sal, deptno)
values (7900, 'BAREL', 1000, 10)
14. 14
Data integrity/quality
update demo.emp_protected e
set e.sal = 2 * e.sal
• Salary not over 7500
• No duplicate employee numbers
• Existing department
update demo.emp_protected e
set e.deptno = 50
where e.ename = 'SCOTT'
insert into demo.emp_protected
(empno, ename, sal, deptno)
values (7900, 'BAREL', 1000, 10)
alter table demo.emp_protected add constraint sal_under_7500 check ((sal < 7500))
alter table demo.emp_protected add constraint pk_emp_protected primary key (empno)
alter table demo.emp_protected add constraint fk_emp_protected_dept
foreign key (deptno) references demo.dept (deptno)
dataintegrity.sql
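With the constraints in place, the same three statements that silently corrupted emp_unprotected are now rejected. A sketch of what to expect (the ORA- numbers shown are the standard Oracle errors for these violations; the exact constraint names echo the ALTER TABLE statements above):

```sql
update demo.emp_protected e
   set e.sal = 2 * e.sal;
-- ORA-02290: check constraint (DEMO.SAL_UNDER_7500) violated

insert into demo.emp_protected (empno, ename, sal, deptno)
values (7900, 'BAREL', 1000, 10);
-- ORA-00001: unique constraint (DEMO.PK_EMP_PROTECTED) violated

update demo.emp_protected e
   set e.deptno = 50
 where e.ename = 'SCOTT';
-- ORA-02291: integrity constraint (DEMO.FK_EMP_PROTECTED_DEPT) violated
--            - parent key not found
```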
17. 17
Flashback queries
…
insert into emp_fb values (7782, 'CLARK', 2450, 10);
insert into emp_fb values (7788, 'SCOTT', 3000, 20);
insert into emp_fb values (7839, 'KING', 5000, 10);
insert into emp_fb values (7844, 'TURNER', 1500, 30);
insert into emp_fb values (7876, 'ADAMS', 1100, 20);
insert into emp_fb values (7900, 'JAMES', 950, 30);
insert into emp_fb values (7902, 'FORD', 3000, 20);
insert into emp_fb values (7934, 'MILLER', 1300, 10);
18. 18
Flashback queries
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
19. 19
Flashback queries
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
update emp_fb e
set e.sal = e.sal + 200
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2850.00
7839 KING 5400.00
7934 MILLER 1700.00
20. 20
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2850.00
7839 KING 5400.00
7934 MILLER 1700.00
Flashback queries
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
update emp_fb e
set e.sal = e.sal + 200
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2850.00
7839 KING 5400.00
7934 MILLER 1700.00
commit
21. 21
Flashback queries
…
insert into emp_fb values (7782, 'CLARK', 2450, 10);
insert into emp_fb values (7788, 'SCOTT', 3000, 20);
insert into emp_fb values (7839, 'KING', 5000, 10);
insert into emp_fb values (7844, 'TURNER', 1500, 30);
insert into emp_fb values (7876, 'ADAMS', 1100, 20);
insert into emp_fb values (7900, 'JAMES', 950, 30);
insert into emp_fb values (7902, 'FORD', 3000, 20);
insert into emp_fb values (7934, 'MILLER', 1300, 10);
truncate table emp_fb
22. 22
Flashback queries
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
23. 23
Flashback queries
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
update emp_fb e
set e.sal = e.sal + 200
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2850.00
7839 KING 5400.00
7934 MILLER 1700.00
24. 24
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2850.00
7839 KING 5400.00
7934 MILLER 1700.00
Flashback queries
select e.empno, e.ename, e.sal
from emp_fb e
where e.deptno = 10
update emp_fb e
set e.sal = e.sal + 200
where e.deptno = 10
commit
25. 25
Flashback queries
select e.empno, e.ename, e.sal
from demo.emp_fb
as of timestamp to_timestamp(sysdate - 1/(24*60*60)) e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2850.00
7839 KING 5400.00
7934 MILLER 1700.00
26. 26
Flashback queries
select e.empno, e.ename, e.sal
from demo.emp_fb
as of timestamp to_timestamp(sysdate - 1/(24*60*60)) e
where e.deptno = 10
EMPNO ENAME SAL
----- ---------- ---------
7782 CLARK 2650.00
7839 KING 5200.00
7934 MILLER 1500.00
select e.empno, e.ename, e.sal
from demo.emp_fb
as of scn 23352983 e
where e.deptno = 10
flashback_fb1.sql
flashback_fb2.sql
flashback_fb3.sql
29. 29
Temporal validity
insert into demo.customers (id, name, address) values
(1, 'Patrick', '1 Oxford Street')
insert into demo.customers (id, name, address) values
(2, 'Steven', '57 Chicago Boulevard')
select * from demo.customers c
order by c.id
ID NAME ADDRESS
--- ------------------------- -------------------------
1 Patrick 1 Oxford Street
2 Steven 57 Chicago Boulevard
30. 30
Temporal validity
update demo.customers c
set c.address = '1 Oracle Lane'
where c.id = 1
select * from demo.customers c
order by c.id
ID NAME ADDRESS
--- ------------------------- -------------------------
1 Patrick 1 Oracle Lane
2 Steven 57 Chicago Boulevard
31. 31
Temporal validity
truncate table demo.customers
insert into demo.customers (id, name, address) values
(1, 'Patrick', '1 Oxford Street')
insert into demo.customers (id, name, address) values
(2, 'Steven', '57 Chicago Boulevard')
select * from demo.customers c
order by c.id
ID NAME ADDRESS
--- ------------------------- -------------------------
1 Patrick 1 Oxford Street
2 Steven 57 Chicago Boulevard
alter table demo.customers add period for valid_time
32. 32
Temporal validity
truncate table demo.customers
insert into demo.customers (id, name, address) values
(1, 'Patrick', '1 Oxford Street')
insert into demo.customers (id, name, address) values
(2, 'Steven', '57 Chicago Boulevard')
ID NAME ADDRESS
--- ------------------------- -------------------------
1 Patrick 1 Oxford Street
2 Steven 57 Chicago Boulevard
select * from demo.customers c
order by c.id
alter table demo.customers add period for valid_time
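The bare `add period for valid_time` creates two hidden TIMESTAMP columns named valid_time_start and valid_time_end, which is why the later queries can reference them without ever declaring them. The period columns can also be named explicitly; a sketch with column names of my own choosing:

```sql
-- Explicitly named (and therefore visible) validity columns.
alter table demo.customers
  add period for valid_time (valid_from, valid_till);

-- Or declared up front at creation time:
create table demo.customers2
( id          number
, name        varchar2(25)
, address     varchar2(25)
, valid_from  timestamp
, valid_till  timestamp
, period for valid_time (valid_from, valid_till)
);
```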
33. 33
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street
2 Steven 57 Chicago Boulevard
34. 34
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
update demo.customers c
set c.valid_time_end = to_date('06012016 235959'
,'MMDDYYYY HH24MISS')
where c.id = 1
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street
2 Steven 57 Chicago Boulevard
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street 06-01-2016
2 Steven 57 Chicago Boulevard
35. 35
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street 06-01-2016
1 Patrick 1 Oracle Lane 06-02-2016
2 Steven 57 Chicago Boulevard
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
update demo.customers c
set c.valid_time_end = to_date('06012016 235959'
,'MMDDYYYY HH24MISS')
where c.id = 1
insert into demo.customers(id, name, address, valid_time_start)
values (1,'Patrick','1 Oracle Lane',to_date('06022016','MMDDYYYY'))
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street 06-01-2016
2 Steven 57 Chicago Boulevard
36. 36
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
exec dbms_flashback_archive.enable_at_valid_time('CURRENT')
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oracle Lane 06-02-2016
2 Steven 57 Chicago Boulevard
37. 37
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
exec dbms_flashback_archive.enable_at_valid_time('CURRENT')
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oracle Lane 06-02-2016
2 Steven 57 Chicago Boulevard
exec dbms_flashback_archive.enable_at_valid_time('ASOF'
, to_date('06012016','MMDDYYYY'))
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street 06-01-2016
2 Steven 57 Chicago Boulevard
38. 38
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oracle Lane 06-02-2016
2 Steven 57 Chicago Boulevard
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street 06-01-2016
2 Steven 57 Chicago Boulevard
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
exec dbms_flashback_archive.enable_at_valid_time('CURRENT')
exec dbms_flashback_archive.enable_at_valid_time('ASOF'
, to_date('06012016','MMDDYYYY'))
exec dbms_flashback_archive.enable_at_valid_time('ASOF'
, to_date('06022016','MMDDYYYY'))
39. 39
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oxford Street 06-01-2016
1 Patrick 1 Oracle Lane 06-02-2016
2 Steven 57 Chicago Boulevard
ID NAME ADDRESS VALID_FROM VALID_TILL
--- ------------------------- ------------------------- ---------- ----------
1 Patrick 1 Oracle Lane 06-02-2016
2 Steven 57 Chicago Boulevard
Temporal validity
select c.id
, c.name
, c.address
, to_char(c.valid_time_start, 'MM-DD-YYYY') valid_from
, to_char(c.valid_time_end, 'MM-DD-YYYY') valid_till
from demo.customers c
order by c.id
exec dbms_flashback_archive.enable_at_valid_time('CURRENT')
exec dbms_flashback_archive.enable_at_valid_time('ASOF'
, to_date('06012016','MMDDYYYY'))
exec dbms_flashback_archive.enable_at_valid_time('ASOF'
, to_date('06022016','MMDDYYYY'))
exec dbms_flashback_archive.enable_at_valid_time('ALL')
temporal_validity.sql
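The dbms_flashback_archive calls above switch the valid-time filter for the whole session. The filter can also be applied to a single query with the AS OF PERIOD FOR clause, leaving the session setting untouched; a sketch using the same dates as the demo:

```sql
-- Valid-time filter for this one statement only.
select *
  from demo.customers
       as of period for valid_time to_date('06012016', 'MMDDYYYY') c
 where c.id = 1;
```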
47. 47
Virtual private database
create or replace function vpdfunc
(schemaname_in in varchar2
,tablename_in in varchar2) return varchar2
is
l_returnvalue varchar2(2000);
begin
case user
when 'BU' then l_returnvalue := 'DEPTNO in (10,30)';
when 'MP' then l_returnvalue := 'DEPTNO in (20,40)';
else l_returnvalue := '1=1';
end case;
return l_returnvalue;
end;
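The function by itself does nothing yet: it has to be registered as a policy with DBMS_RLS before Oracle starts appending its predicate to queries. A sketch, with illustrative schema, table, and policy names:

```sql
begin
  dbms_rls.add_policy
  ( object_schema   => 'DEMO'
  , object_name     => 'EMP'
  , policy_name     => 'EMP_DEPT_POLICY'
  , function_schema => 'DEMO'
  , policy_function => 'VPDFUNC'
  , statement_types => 'SELECT'
  );
end;
/
```

From then on, every SELECT that user BU issues against DEMO.EMP is transparently rewritten with `DEPTNO in (10,30)` appended.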
53. 53
Query result cache
insert into QueryResultCacheTable(id) values (1);
insert into QueryResultCacheTable(id) values (2);
insert into QueryResultCacheTable(id) values (3);
insert into QueryResultCacheTable(id) values (4);
insert into QueryResultCacheTable(id) values (5);
create or replace function veryslowfunction(id_in in
queryresultcachetable.id%type)
return queryresultcachetable.id%type as
begin
dbms_lock.sleep(1);
return id_in;
end;
set timing on
54. 54
Query result cache
select VerySlowFunction(qrct.id)
from QueryResultCacheTable qrct
select /*+ result_cache */
veryslowfunction(qrct.id)
from queryresultcachetable qrct
select /*+ result_cache */
veryslowfunction(qrct.id)
from queryresultcachetable qrct
Executed in 5.88 seconds
Executed in 11.664 seconds
Executed in 0.035 seconds
55. 55
Query result cache
create or replace function veryslowfunction(id_in in
queryresultcachetable.id%type)
return queryresultcachetable.id%type deterministic as
begin
dbms_lock.sleep(1);
return id_in;
end;
56. 56
Query result cache
select /*+ result_cache */
veryslowfunction(qrct.id)
from queryresultcachetable qrct
select /*+ result_cache */
veryslowfunction(qrct.id)
from queryresultcachetable qrct
Executed in 5.507 seconds
Executed in 0.038 seconds
58. 58
Query result cache
insert into FunctionResultCacheTable(id) values (1);
insert into FunctionResultCacheTable(id) values (2);
insert into FunctionResultCacheTable(id) values (3);
insert into FunctionResultCacheTable(id) values (1);
insert into FunctionResultCacheTable(id) values (2);
insert into FunctionResultCacheTable(id) values (3);
insert into FunctionResultCacheTable(id) values (1);
insert into FunctionResultCacheTable(id) values (2);
insert into FunctionResultCacheTable(id) values (3);
insert into FunctionResultCacheTable(id) values (1);
59. 59
Query result cache
select veryslowfunction(frct.id)
from FunctionResultCacheTable frct
Elapsed: 00:00:10.18
create or replace function veryslowfunction(id_in in
FunctionResultCacheTable.id%type)
return FunctionResultCacheTable.id%type as
begin
dbms_lock.sleep(1);
return id_in;
end;
Elapsed: 00:00:10.28
60. 60
create or replace function veryslowfunction(id_in in
FunctionResultCacheTable.id%type)
return FunctionResultCacheTable.id%type deterministic as
begin
dbms_lock.sleep(1);
return id_in;
end;
Query result cache
select veryslowfunction(frct.id)
from FunctionResultCacheTable frct
Elapsed: 00:00:10.18
Elapsed: 00:00:10.28
Elapsed: 00:00:04.16
Elapsed: 00:00:04.09
61. 61
create or replace function veryslowfunction(id_in in
FunctionResultCacheTable.id%type)
return FunctionResultCacheTable.id%type result_cache as
begin
dbms_lock.sleep(1);
return id_in;
end;
Query result cache
select veryslowfunction(frct.id)
from FunctionResultCacheTable frct
Elapsed: 00:00:10.18
Elapsed: 00:00:10.28
Elapsed: 00:00:04.16
Elapsed: 00:00:04.09
Elapsed: 00:00:00.00
Elapsed: 00:00:03.15
Elapsed: 00:00:00.00
VirtualPrivateDatabase.sql
72. 72
Retrieve the top-3 earning employees
Special SQL features
select e.empno
,e.ename
,e.sal
from demo.emp e
where rownum < 4
order by e.sal desc
EMPNO ENAME SAL
----- ---------- ---------
7499 ALLEN 1600.00
7521 WARD 1250.00
7369 SMITH 800.00
73. 73
Retrieve the top-3 earning employees
Special SQL features
select e.empno
,e.ename
,e.sal
from (select e2.empno
,e2.ename
,e2.sal
from demo.emp e2
order by e2.sal desc) e
where rownum < 4
EMPNO ENAME SAL
----- ---------- ---------
7839 KING 5000.00
7788 SCOTT 3000.00
7902 FORD 3000.00
select e.empno
,e.ename
,e.sal
from demo.emp e
where rownum < 4
order by e.sal desc
EMPNO ENAME SAL
----- ---------- ---------
7499 ALLEN 1600.00
7521 WARD 1250.00
7369 SMITH 800.00
74. 74
Retrieve the top-3 earning employees
Special SQL features
with orderedbysal as
(select e.empno
,e.ename
,e.sal
from demo.emp e
order by e.sal desc)
select obs.empno
,obs.ename
,obs.sal
from orderedbysal obs
where rownum < 4
select e.empno
,e.ename
,e.sal
from demo.emp e
order by e.sal desc
fetch first 3 rows only
EMPNO ENAME SAL
----- ---------- ---------
7839 KING 5000.00
7788 SCOTT 3000.00
7902 FORD 3000.00
EMPNO ENAME SAL
----- ---------- ---------
7839 KING 5000.00
7788 SCOTT 3000.00
7902 FORD 3000.00
select e.empno
,e.ename
,e.sal
from (select e2.empno
,e2.ename
,e2.sal
from demo.emp e2
order by e2.sal desc) e
where rownum < 4
Syntax Candy: this is rewritten as
75. 75
Retrieve the next 3 top-earning employees
Special SQL features
with orderedbysal as
(select e.empno
,e.ename
,e.sal
,rownum rn
from demo.emp e
order by e.sal desc)
select obs.empno
,obs.ename
,obs.sal
from orderedbysal obs
where obs.rn between 4 and 6
order by obs.rn
EMPNO ENAME SAL
----- ---------- ---------
7566 JONES 2975.00
7654 MARTIN 1250.00
7698 BLAKE 2850.00
76. 76
Retrieve the next 3 top-earning employees
Special SQL features
with orderedbysal as
(select e.empno
,e.ename
,e.sal
from demo.emp e
order by e.sal desc)
, orderedwithrownum as
(select obs.empno
, obs.ename
, obs.sal
, rownum rn
from orderedbysal obs)
select obr.empno
,obr.ename
,obr.sal
from orderedwithrownum obr
where obr.rn between 4 and 6
order by obr.rn
select e.empno
,e.ename
,e.sal
from demo.emp e
order by e.sal desc
offset 3 rows
fetch next 3 rows only
EMPNO ENAME SAL
----- ---------- ---------
7566 JONES 2975.00
7698 BLAKE 2850.00
7782 CLARK 2450.00
EMPNO ENAME SAL
----- ---------- ---------
7566 JONES 2975.00
7698 BLAKE 2850.00
7782 CLARK 2450.00
with orderedbysal as
(select e.empno
,e.ename
,e.sal
,rownum rn
from demo.emp e
order by e.sal desc)
select obs.empno
,obs.ename
,obs.sal
from orderedbysal obs
where obs.rn between 4 and 6
order by obs.rn
EMPNO ENAME SAL
----- ---------- ---------
7566 JONES 2975.00
7654 MARTIN 1250.00
7698 BLAKE 2850.00
paginating.sql
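The row-limiting clause also solves the classic tie problem that ROWNUM handles badly. With the standard EMP data, SCOTT and FORD both earn 3000, so a "top 2" cut falls in the middle of a tie; WITH TIES keeps everyone who ties with the last returned row:

```sql
-- Top-2 salaries, but keep every row that ties with the cut-off row.
select e.empno
     , e.ename
     , e.sal
  from demo.emp e
 order by e.sal desc
 fetch first 2 rows with ties;
-- KING (5000), SCOTT (3000) and FORD (3000): three rows from a "top-2" query.
```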
77. 77
Build PL/SQL in SQL
Special SQL features
create or replace function wavey(string_in in varchar2)
return varchar2
is
l_returnvalue varchar2(30);
begin
for indx in 1 .. length(string_in) loop
l_returnvalue := l_returnvalue ||
case mod(indx, 2)
when 0 then upper(substr(string_in, indx, 1))
else lower(substr(string_in, indx, 1))
end;
end loop;
return l_returnvalue;
end;
Function created
78. 78
Build PL/SQL in SQL
Special SQL features
select e.empno, e.ename, wavey(e.ename) wavey_ename, e.sal
from demo.emp e
EMPNO ENAME WAVEY_ENAME SAL
----- ---------- ------------ ---------
7369 SMITH sMiTh 800.00
7499 ALLEN aLlEn 1600.00
7521 WARD wArD 1250.00
7566 JONES jOnEs 2975.00
7654 MARTIN mArTiN 1250.00
7698 BLAKE bLaKe 2850.00
7782 CLARK cLaRk 2450.00
7788 SCOTT sCoTt 3000.00
7839 KING kInG 5000.00
7844 TURNER tUrNeR 1500.00
7876 ADAMS aDaMs 1100.00
7900 JAMES jAmEs 950.00
7902 FORD fOrD 3000.00
7934 MILLER mIlLeR 1300.00
14 rows selected
79. 79
Special SQL features
with function wavey(string_in in varchar2) return varchar2
is
l_returnvalue varchar2(30);
begin
for indx in 1 .. length(string_in) loop
l_returnvalue := l_returnvalue ||
case mod(indx, 2)
when 1 then upper(substr(string_in, indx, 1))
else lower(substr(string_in, indx, 1))
end;
end loop;
return l_returnvalue;
end;
select e.empno, e.ename, wavey(e.ename) wavey_ename, e.sal
from demo.emp e
EMPNO ENAME WAVEY_ENAME SAL
----- ---------- ------------ ---------
7369 SMITH SmItH 800.00
7499 ALLEN AlLeN 1600.00
…
7934 MILLER MiLlEr 1300.00
14 rows selected
plsql_in_with.sql
92. 93
Identity columns (12c)
create or replace trigger makesureitsfilled
before insert or update on tablewithuniquecolumn
for each row
begin
if :new.theuniquecolumn is null then
select sequenceforuniquecolumn.nextval
into :new.theuniquecolumn
from dual;
end if;
end;
93. 94
Identity columns (12c)
create or replace trigger makesureitsfilled
before insert or update on tablewithuniquecolumn
for each row
begin
if :new.theuniquecolumn is null then
:new.theuniquecolumn := sequenceforuniquecolumn.nextval;
end if;
end;
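The sequence-plus-trigger pattern above is exactly what 12c identity columns make obsolete: the sequence and trigger are managed by the database itself. A sketch, with an illustrative table name:

```sql
-- 12c: no sequence object, no trigger. The column fills itself.
create table tablewithidentity
( theuniquecolumn  number generated always as identity
, payload          varchar2(100)
);

insert into tablewithidentity (payload) values ('no trigger needed');

-- "by default on null" still accepts explicit values, mimicking the
-- if-null check in the trigger:
--   theuniquecolumn number generated by default on null as identity
```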
99. 100
Invisible columns (12c)
select * from show
I AM THE INVISIBLE MAN
- -- --- --------- ---
I am the invisible man
alter table show modify invisible invisible
100. 101
Invisible columns (12c)
select * from show
I AM THE INVISIBLE MAN
- -- --- --------- ---
I am the invisible man
alter table show modify invisible invisible
I AM THE MAN
- -- --- ---
I am the man
101. 102
Invisible columns (12c)
select * from show
I AM THE INVISIBLE MAN
- -- --- --------- ---
I am the invisible man
alter table show modify invisible invisible
I AM THE MAN
- -- --- ---
I am the man
select I, am, the, invisible, man
from show
IAmTheInvisibleColumn.sql
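For completeness: invisibility works in both directions and at creation time too. A sketch against the same SHOW table (the added column name is illustrative):

```sql
-- Add a column that is invisible from the start...
alter table show add (hidden_note varchar2(30) invisible);

-- ...and bring the original column back into "select *" output:
alter table show modify invisible visible;
```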
Whitelisting PL/SQL program units

create or replace procedure onlywhitelisted
  accessible by (normalprocedure)
is
begin
  dbms_output.put_line('You are allowed');
end;

create or replace procedure normalprocedure
is
begin
  dbms_output.put_line('normalprocedure calls onlywhitelisted');
  onlywhitelisted;
end;

create or replace procedure evilprocedure
is
begin
  dbms_output.put_line('evilprocedure calls onlywhitelisted');
  onlywhitelisted;
end;

create or replace procedure evilprocedure
is
begin
  dbms_output.put_line('evilprocedure calls normalprocedure');
  normalprocedure;
end;

whitelisting_12c.sql
Credits
• Whitelisting PL/SQL program units
  – http://www.ccs-inc.com/images/uploads/Yes_No_Shutterstock_83547358.jpg
Editor's Notes
Not just a piece of iron.
It has more features than you will probably ever use.
I will give an overview of a subset of these features, some of them quite new.
Single Point Of Data Integrity
If you create a simple table
And add some data to it
If you create your business rules in your front-end application, there will always be someone who will bypass your application and use something like SQL*Plus to access your database.
Let’s say you create a new application and add the protection to the table.
alter table demo.emp_protected add constraint sal_under_7500 check ((sal < 7500))
alter table demo.emp_protected add constraint pk_emp_protected primary key (empno)
alter table demo.emp_protected add constraint fk_emp_protected_dept foreign key (deptno) references demo.dept (deptno)
What did my data look like at a certain moment in time. No more need for journaling tables.
Every query has always been a flashback query due to read-consistency, based on the SCN value at the time of execution.
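The "what did my data look like at a certain moment" question from the notes above can be sketched with a flashback query; table name and the SCN value are illustrative:

```sql
-- View the EMP table as it was ten minutes ago.
select empno, sal
from   emp
as of  timestamp systimestamp - interval '10' minute;

-- Or pin the query to a known SCN instead of a wall-clock time.
select empno, sal
from   emp
as of  scn 1234567;  -- SCN value is illustrative only
```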
Now I move house.
Easy as doing an update to the row.
But this has to be done exactly on the day I move...
I would like to be able to put in the new data and have Oracle serve me the right rows
Let’s reset our situation…
Then we add a column to hold the validity of the record.
Now we can update the ending validity for the record
And then we can add the new address with a starting date
Enable temporal validity
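A sketch of the 12c Temporal Validity syntax the notes refer to; the table, column, and period names (addresses, valid_start, valid_end, valid_time) are assumptions:

```sql
-- Declare a valid-time period over the existing start/end columns.
alter table addresses
  add period for valid_time (valid_start, valid_end);

-- Ask Oracle for only the rows that are valid right now...
select * from addresses
as of period for valid_time sysdate;

-- ...or valid at some other moment in time.
select * from addresses
as of period for valid_time date '2024-01-01';
```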
only read and write data you are authorized for
There are two users. The business user and a maintenance person.
We have a database with a table containing sensitive data.
The business user is allowed to see all the data
But the maintenance person is only allowed to see the last four digits of the number (contract, credit card or what have you).
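The slides don't show which feature implements this; Data Redaction (12c, DBMS_REDACT) is one way to sketch it. The schema, table, column, and user names below are illustrative only:

```sql
-- Hedged sketch: mask all but the last four characters of a
-- 16-character contract number for the maintenance user.
begin
  dbms_redact.add_policy(
    object_schema       => 'DEMO',
    object_name         => 'CONTRACTS',
    policy_name         => 'mask_contract_no',
    column_name         => 'CONTRACT_NO',
    function_type       => dbms_redact.partial,
    -- input format, output format, mask character, start, end:
    -- replace positions 1-12 with '*', leave the last four visible
    function_parameters => 'VVVVVVVVVVVVVVVV,VVVVVVVVVVVVVVVV,*,1,12',
    -- apply the policy only to the maintenance user
    expression          => q'[SYS_CONTEXT('USERENV','SESSION_USER') = 'MAINTENANCE']');
end;
/
```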
near zero downtime application upgrades or support for parallel database worlds
Every database needs upgrading due to new demands, new features, bugfixes etc.
If you want to do an application upgrade…
Adding new objects is no problem…
Dropping columns from tables is a problem. Pre-11g databases also had a problem when adding columns to a table; since 11g the database uses fine-grained dependency tracking, which means dependencies are tracked at column level instead of object level.
Changing code is a problem. If you change the implementation of a program in a package (the package body), this should normally not cause any problems, but if you change the signature of a program (the package specification) then you cause invalidation of dependent objects.
Dropping tables is a problem. Chances are, near 100%, that this table is used by other objects, causing invalidation of those objects.
Application upgrade normally follows these steps:
Creation of new objects – No problems here
It becomes interesting when you start changing existing objects, drop redundant objects or convert/migrate existing objects.
You would want to have a parallel universe in which you can perform the upgrade without affecting the users and then, when you're done, have the users use the new version.
If tables are not editionable, how do we cope with data? To handle issues with the table changes there are cross-edition-triggers. These are built to make sure data entered in one edition is presented correctly in the other edition.
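The "parallel universe" the notes describe is edition-based redefinition; a minimal sketch, with the edition and procedure names assumed:

```sql
-- Create a new edition as a child of the default base edition.
create edition release_2 as child of ora$base;

-- Switch the session into the new edition and upgrade the
-- editionable objects (procedures, views, synonyms) there,
-- while users keep running the old edition undisturbed.
alter session set edition = release_2;

create or replace procedure do_work is
begin
  null; -- new implementation, visible only in release_2
end;
/

-- Once tested, make the new edition the database default.
alter database default edition = release_2;
```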
Create Top-N queries
Create Top-N queries
Or paginate your result
Pros:
Faster especially on large datasets
Logic closer to its use
Cons:
No reuse of the logic
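The Top-N and pagination queries mentioned above can be sketched with the 12c row-limiting clause; table and column names are assumed:

```sql
-- Top-N: the five highest earners.
select empno, ename, sal
from   emp
order  by sal desc
fetch  first 5 rows only;

-- Pagination: skip two pages of five, return the third page.
select empno, ename, sal
from   emp
order  by sal desc
offset 10 rows fetch next 5 rows only;
```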
First you create the table. Note the last column. This is the virtual column which is calculated based on the values of other fields in the record.
First you create the table. Note the last column. This is the virtual column which is calculated based on the values of other fields in the record.
You can’t insert values for the virtual column
But when you select the values from the table, the column is calculated for you.
You can of course achieve the same effect using a view, but that requires an extra object.
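A sketch of the kind of table the notes describe; the names and the income calculation are assumptions:

```sql
-- The last column is virtual: computed from other columns,
-- stored nowhere, calculated when selected.
create table emp_vc
( empno  number primary key
, sal    number(7,2)
, comm   number(7,2)
, income number generated always as (sal + nvl(comm, 0)) virtual
);

-- You can't insert a value for the virtual column itself.
insert into emp_vc (empno, sal, comm) values (1, 1000, 250);

select empno, income from emp_vc;  -- income is computed on request
```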
If you create a table with a virtual column with an invalid calculation.
And add some data
If you try to execute a select * from the table you’ll get an error
If, however, you select all the individual columns except the virtual one, you're fine. The column value is only calculated when it is requested.
Remember the good old days? When you wanted a column to hold a unique value and always be filled?
First you created the table.
Then you created a sequence to be used for the unique column
Then you would create a trigger on the table…
Or if you are using a more recent version of the database it would look like this.
The trigger retrieves a value from the sequence and uses that to populate the unique column
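The pre-12c pattern the notes describe might look like this; the table, sequence, and trigger names are assumptions:

```sql
create sequence emp_seq;

create or replace trigger emp_bir
before insert on emp_old
for each row
begin
  -- 11g direct assignment; pre-11g needed:
  --   select emp_seq.nextval into :new.id from dual;
  :new.id := emp_seq.nextval;
end;
/
```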
Now we create a table with an identity column
No more need for a sequence
No more need for a trigger
Don’t know about you, but I think doing less work is better.
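With a 12c identity column, the sequence and trigger collapse into a single column definition; the table name here is assumed:

```sql
create table customers
( id   number generated always as identity primary key
, name varchar2(100)
);

-- id is populated by the implicit sequence; no trigger needed.
insert into customers (name) values ('Patrick');
```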