MySQL exposes a collection of tunable parameters and indicators that is frankly intimidating. But a poorly tuned MySQL server is a bottleneck for your PHP application's scalability. This session shows how to do InnoDB tuning and read the InnoDB status report in MySQL 5.5.
Conference: HP Big Data Conference 2015
Session: Real-world Methods for Boosting Query Performance
Presentation: "Extra performance out of thin air"
Presenter: Konstantine Krutiy, Principal Software Engineer / Vertica Whisperer
Company: Localytics
Description:
Learn how to get extra performance out of Vertica from areas you never expected.
This presentation will illustrate how you can improve performance of your Vertica cluster without extra budget.
All you need is ingenuity, knowledge of Vertica internals, and the ability to challenge conventional wisdom.
We will show you real world examples on gaining performance by eliminating unneeded work, eliminating unneeded system waits and making your system operate more efficiently.
Visit my blog http://www.dbjungle.com for more Vertica insights
The Challenges of Distributing Postgres: A Citus Story – Hanna Kelman
Set theory forms the basis for relational algebra and relational databases, and SQL is the lingua franca of modern RDBMSs. Even with all the attention given to NoSQL in recent years, the lion's share of database usage remains relational. But until recently, nearly all relational database solutions have been limited to the resources of a single node. Not anymore.
This talk is about my team’s journey tackling the challenges of distributing SQL. Specifically in the context of my favorite (open source) database: Postgres. I believe that too many developers spend too much time worrying about scaling their databases. So at Citus Data, we created an extension to Postgres that enables developers to scale out compute, memory, and storage by distributing queries across a cluster of nodes.
This talk describes the distributed systems challenges we faced at Citus in scaling out Postgres—and how we addressed them. I’ll talk about how we use PostgreSQL’s extension APIs to parallelize queries in a distributed cluster. I’ll cover the architecture of a distributed query planner and specifically how the join order planner has to choose between broadcast, co-located, and repartition joins in order to minimize network I/O. And if there’s time, I’ll walk through the dynamic executor logic that we built. The end result: a distributed database and a lot less time spent worrying about scale.
OpenWorld 2018 - Common Application Developer Disasters – Connor McDonald
Two of the critical requirements of a database are:
- speed
- data integrity
The database can achieve these things, but only as long as you understand the mechanisms correctly. If you don't, then things can go downhill fast.
Most important "trick" of performance instrumentation – Cary Millsap
This is the material from my 10-minute TED-style talk on 2014-09-29 at OakTable World, held in conjunction with Oracle OpenWorld 2014 in San Francisco. It explains the importance of assigning a unique id to the Oracle Database code path associated with each performance experience that users can have with your system.
Percona xtra db cluster(pxc) non blocking operations, what you need to know t... – Marco Tusa
Performing simple DDL operations such as ADD/DROP INDEX in a tightly coupled cluster like PXC can become a nightmare. The metadata lock will block data modifications for long periods, and to bypass this we have had to get creative, using rolling schema upgrades or Percona online-schema-change. With NBO, we can avoid such craziness, at least for a simple operation like adding an index. In this brief talk I will illustrate the negative effects of NOT using NBO, as well as what you should do to use it correctly and what to expect from it.
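For context, NBO is selected per session through the wsrep_OSU_method variable; a minimal sketch, assuming a PXC version that supports NBO (8.0.25+) and with table and index names purely illustrative:

```sql
-- Switch the schema-upgrade method to Non-Blocking Operations for this
-- session only (the cluster-wide default is usually TOI)
SET SESSION wsrep_OSU_method = 'NBO';

-- The DDL now uses a less blocking locking strategy than TOI
-- (table and index names are illustrative)
ALTER TABLE orders ADD INDEX idx_created_at (created_at);
```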
Slides from the ITOUG events in Rome and Milan 2020.
Most people think of the Flashback features in Oracle as the "In Case of Emergency" switch, to be used only when some catastrophe has occurred on your database. And it is true that Flashback will definitely help you 3 seconds after you press the Commit button and realise that you probably needed a WHERE clause on that "delete all rows from the SALES table" SQL statement, or when you run "drop table" on the Production database when you were just so sure that you were logged onto the Test system. But Flashback is not only for those "Oh No!" moments. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career, and they will open up a myriad of other benefits as well.
Latin America Tour 2019 - 10 great sql features – Connor McDonald
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
Another year goes by, and most likely, another data access framework has been invented. It will claim to be the fastest, smartest way to talk to the database, and just like all those that came before it, it will not be. Because the best database access tool has been there for more than 30 years now, and that is PL/SQL. Although we all sometimes fall prey to the mindset of “Oh look, a shiny new tool, we should start using it," the performance and simplicity of PL/SQL remain unmatched. This session looks at the failings of other data access languages, why even a cursory knowledge of PL/SQL will make you a better developer, and how to get the most out of PL/SQL when it comes to database performance.
Slides from OOW13
It's the age-old problem. The SQL statement that needs to run in 5 seconds unfortunately runs in 10 seconds, or 10 minutes, or 10 hours. A SQL statement gets emailed to you with the simple subject title: "Make it faster". We'll start from this point in the process, and look at what you can do to tackle this common issue.
Analytic SQL functions, or "window functions", have been around since 8.1.6, but they are still dramatically underused by application developers. This session looks at the syntax and usage of analytic functions, and how they can supercharge your SQL skillset.
Covers analytics from their inception in 8.1.6 all the way through to the enhancements in 18 and 19.
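As a taste of what these sessions cover, here is a minimal sketch (table and column names from the classic EMP schema, purely illustrative): a running total and a per-department salary rank, neither of which needs a self-join:

```sql
select empno, deptno, sal,
       sum(sal) over (order by empno)                        running_total,
       rank()   over (partition by deptno order by sal desc) sal_rank
from   emp;
```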
Sangam 19 - Successful Applications on Autonomous – Connor McDonald
The autonomous database offers insane levels of performance, but you won't be able to attain that if you are not constructing your SQL statements in a way that is scalable...and more importantly, secure from hacking
By expanding our knowledge of SQL facilities, we can let all the boring work be handled via SQL rather than a lot of middle-tier code, and we can get performance benefits as an added bonus. Here are some SQL techniques to solve problems that would otherwise require a lot of complex coding, freeing up your time to focus on the delivery of great applications.
APEX tour 2019 - successful development with autonomous – Connor McDonald
The autonomous database offers insane levels of performance, but you won't be able to attain that if you are not constructing your SQL statements in a way that is scalable...and more importantly, secure from hacking
Apologies for the missing pics and awful layout...you can thank SlideShare for that :-(
Slides from the APAC Groundbreakers Tour from Perth and Melbourne legs. This session covered the features in 18c, 19c and 20c, along with the new free database offerings from Oracle from OpenWorld 2019
Slides from OpenWorld. Flashback has been around for a long time, yet people assume it sits entirely within the realm of the DBA. But with modern development techniques such as continuous integration/continuous deployment, flashback is actually perfect for *developers*.
Slides from the OpenWorld talk on read consistency. It is the feature that makes Oracle such a great database for performance and concurrency. But if misunderstood, it can lead to confusion for developers
Slides from OpenWorld 2019. Want to make sure your applications are slow, burn lots of CPU, and are easily broken into by hackers? Well...in reality, if you know how to do this, then you'll know how to avoid it.
Slides from OpenWorld 2019. A look at how to safely (and unsafely) kill sessions in the Oracle database, and how to perhaps avoid killing them altogether.
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will open up a myriad of other benefits as well.
Latin America Tour 2019 - slow data and sql processing – Connor McDonald
Well done! You've come up with the killer idea for 2020. You've got the best UI design anyone has ever seen! Your modern application ticks all the boxes — serverless, functional, Kubernetes, microservices, API-based, the list goes on. It runs on every OS and every type of device. But unfortunately, all of this counts for absolutely NOTHING if your data access is slow or buggy. But an Autonomous database will fix all that, right? Only if you understand the fundamentals of how SQL is processed by the database. For novice developers, SQL can be hard to understand and sometimes totally hidden from view under an ORM. Let's peel back the covers to show how SQL is processed, how to avoid getting hacked, and how to get data back to your application in a snappy fashion.
OG Yatra - upgrading to the new 12c+ optimizer – Connor McDonald
The 12c optimizer has a vast array of improvements, but of course, functionality changes mean that your SQL plans might also change when you upgrade. This slide deck covers what has changed, and how to ensure better, more stable performance when you upgrade.
The skill set of a database practitioner is much more than what is read in the documentation, on blogs, or on StackOverflow. It is the knowledge from years of trial and error, experimentation, and sometimes painful failures. The problem is it takes time—a long, long time—to build that experience. This session aims to fast-track that path. Get a collection of hints, tips, features, and techniques picked up from the smartest people in the community.
OG Yatra - Flashback, not just for developers – Connor McDonald
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will open up a myriad of other benefits as well.
Kscope19 - Flashback: Good for Developers as well as DBAs – Connor McDonald
Flashback is not only for those "Oh No!" moments when we make a mistake. It enables benefits for developers ranging from data consistency to continuous integration and data auditing. Tucked away in Enterprise Edition are six independent and powerful technologies that might just save your career—they will open up a myriad of other benefits as well.
Kscope19 - Understanding the basics of SQL processing – Connor McDonald
Better data access typically means understanding how SQL is processed by the database, and who has time for that? Let's peel back the covers to show how SQL is processed, how to avoid getting hacked, and how to get data back to your application in a snappy fashion.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and walk you through a short journey of existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I want to give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
- Execution from the test manager
- Orchestrator execution result
- Defect reporting
- SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
28. 28
SQL> create user DEV_TESTING
2 identified by MYPASS;
create user DEV_TESTING identified by MYPASS
*
ERROR at line 1:
ORA-65096: invalid common user or role name
?
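The likely cause (an assumption, since the slide leaves it as a question): from 12c onward, connecting to the root container of a multitenant database means CREATE USER creates a *common* user, whose name must start with C##. Two hedged ways around ORA-65096, with MYPDB as a placeholder PDB name:

```sql
-- Option 1: give the common user the required C## prefix
SQL> create user C##DEV_TESTING identified by MYPASS;

-- Option 2: switch into a pluggable database and create a local
-- user there (MYPDB is a placeholder name)
SQL> alter session set container = MYPDB;
SQL> create user DEV_TESTING identified by MYPASS;
```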
41. 41
SQL> desc T
Name Null? Type
----------------------------- -------- -------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
SQL> desc T_AUDIT
Name Null? Type
----------------------------- -------- --------------
AUDIT_DATE DATE
AUDIT_ACTION VARCHAR2(1)
OWNER NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
42. 42
SQL> create or replace
2 trigger AUDIT_TRG
3 after insert or update or delete on T
4 for each row
5 declare
6 v_action varchar2(1) := case when updating then 'U'
7 when deleting then 'D' else 'I' end;
8 begin
9 if updating or inserting then
10 insert into T_AUDIT
11 values (sysdate
12 ,v_action
13 ,:new.owner
14 ,:new.object_name);
15 else
16 insert into T_AUDIT
17 values (sysdate
18 ,v_action
19 ,:old.owner
20 ,:old.object_name);
21 end if;
22 end;
23 /
Trigger created.
47. 47
create or replace
package T_PKG is
type each_row is record ( action varchar2(1),
owner varchar2(30),
object_name varchar2(30)
);
type row_list is table of each_row
index by pls_integer;
g row_list;
end;
/
48. 48
create or replace
trigger AUDIT_TRG1
before insert or update or delete on T
begin
t_pkg.g.delete;
end;
/
49. 49
create or replace
trigger AUDIT_TRG2
after insert or update or delete on T
for each row
begin
if updating or inserting then
t_pkg.g(t_pkg.g.count+1).owner := :new.owner;
t_pkg.g(t_pkg.g.count).object_name := :new.object_name;
else
t_pkg.g(t_pkg.g.count+1).owner := :old.owner;
t_pkg.g(t_pkg.g.count).object_name := :old.object_name;
end if;
end;
/
50. 50
create or replace
trigger AUDIT_TRG3
after insert or update or delete on T
declare
v_action varchar2(1) :=
case when updating then 'U'
when deleting then 'D'
else 'I' end;
begin
forall i in 1 .. t_pkg.g.count
insert into T_AUDIT
values (
sysdate,
v_action,
t_pkg.g(i).owner,
t_pkg.g(i).object_name);
t_pkg.g.delete;
end;
/
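From 11g onward, the package-plus-three-triggers pattern above can be collapsed into a single compound trigger. A minimal sketch (not from the slides), assuming T_AUDIT has the four columns shown earlier:

```sql
create or replace trigger AUDIT_CTRG
for insert or update or delete on T
compound trigger
  -- state that previously lived in T_PKG
  type row_list is table of T_AUDIT%rowtype index by pls_integer;
  g row_list;

  after each row is
  begin
    g(g.count+1).audit_date := sysdate;
    g(g.count).audit_action := case when updating then 'U'
                                    when deleting then 'D'
                                    else 'I' end;
    g(g.count).owner        := case when deleting then :old.owner
                                    else :new.owner end;
    g(g.count).object_name  := case when deleting then :old.object_name
                                    else :new.object_name end;
  end after each row;

  after statement is
  begin
    forall i in 1 .. g.count
      insert into T_AUDIT values g(i);
  end after statement;
end;
/
```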
51. 51
SQL> insert into T
2 select owner, object_name
3 from all_objects
4 where rownum <= 10000;
INSERT INTO T_AUDIT
VALUES
( SYSDATE, :B1 , :B2 , :B3 )
10000 rows created.
call count cpu elapsed disk query current
------- ------ ------- --------- -------- ---------- ----------
Parse 1 0.00 0.00 0 0 0
Execute 1 0.04 0.03 0 90 478
Fetch 0 0.00 0.00 0 0 0
------- ------ ------- --------- -------- ---------- ----------
total 2 0.04 0.03 0 90 478
64. 64
SQL> alter table EMP add
2 CREATE_TS timestamp default systimestamp;
SQL> update EMP
2 set sal = sal*10
3 where empno = 7369;
SQL> delete from EMP
2 where empno = 7934;
SQL> update EMP
2 set job = 'SALES'
3 where ename = 'SMITH';
SQL> update EMP
2 set comm = 1000
3 where empno = 7369;
SQL> commit;
65. 65
SQL> select empno, ename, job, sal, comm,
2 nvl(VERSIONS_STARTTIME,CREATE_TS) TS
3 ,nvl(VERSIONS_OPERATION,'I') OP
4 from EMP
5 versions between timestamp
6 timestamp '2014-02-11 20:12:00' and
7 systimestamp
8 order by empno;
EMPNO ENAME JOB SAL COMM TS OP
---------- ---------- --------- ---------- ---------- ------------ --
7369 SMITH CLERK 806 08.10.51 PM I
7369 SMITH SALES 8060 1000 08.12.10 PM U
7499 ALLEN SALESMAN 1606 300000000 08.10.51 PM I
7521 WARD SALESMAN 1256 500000000 08.10.51 PM I
7566 JONES MANAGER 2981 08.10.51 PM I
7900 JAMES CLERK 956 08.10.51 PM I
7902 FORD ANALYST 3006 08.10.51 PM I
7934 MILLER CLERK 1306 08.10.51 PM I
7934 MILLER CLERK 1306 08.12.10 PM D
...
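How far back such VERSIONS queries can reach is bounded by undo retention. A hedged sketch of the relevant knob and of the single-point AS OF form of the same feature (timestamp window and values illustrative):

```sql
-- Keep undo long enough (in seconds) for the versions you want to see
SQL> alter system set undo_retention = 3600;

-- AS OF: one consistent image of the row at a single point in time
SQL> select sal from EMP
  2  as of timestamp systimestamp - interval '10' minute
  3  where empno = 7369;
```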
109. 109
SQL> select * from emp_temporal
2 order by empno, start_dt;
EMPNO NAME SAL DEPTNO START_DT END_DT
---------- --------- ---------- ---------- --------- ---------
7369 SMITH 806 20 01-JAN-14 01-MAR-14
7369 SMYTH 806 21 02-MAR-14 01-SEP-14
7369 SMYTH 950 21 02-SEP-14 01-OCT-14
7499 ALLEN 1606 30 01-JAN-14 01-MAR-14
7521 WARD 1256 30 01-JAN-14 01-MAR-14
7566 JONES 2981 20 01-JAN-14 01-MAR-14
7654 MARTIN 1256 30 01-JAN-14 01-MAR-14
7698 BLAKE 2856 30 01-JAN-14 01-MAR-14
110. 110
SQL> select * from emp_temporal
2 as of period for valid_range '01-FEB-14';
EMPNO NAME SAL DEPTNO START_DT END_DT
---------- --------- --------- ---------- --------- --------
7369 SMITH 806 20 01-JAN-14 01-MAR-14
7499 ALLEN 1606 30 01-JAN-14 01-MAR-14
7521 WARD 1256 30 01-JAN-14 01-MAR-14
7566 JONES 2981 20 01-JAN-14 01-MAR-14
7654 MARTIN 1256 30 01-JAN-14 01-MAR-14
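The emp_temporal queries above rely on 12c Temporal Validity, whose setup step is not shown on these slides. A sketch, assuming the START_DT/END_DT date columns already exist on the table:

```sql
-- Declare a valid-time period over the two existing date columns,
-- enabling the "as of period for valid_range" query syntax
SQL> alter table emp_temporal
  2  add period for valid_range (start_dt, end_dt);
```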
120. 120
DECODE(BITAND("A209"."PROPERTY",8192),8192,'NESTED TABLE','TABLE')
"OBJECT_TYPE",
2 "OBJECT_TYPE_ID",5 "SEGMENT_TYPE_ID",
"A209"."OBJ#" "OBJECT_ID",
"A209"."FILE#" "HEADER_FILE",
"A209"."BLOCK#" "HEADER_BLOCK",
"A209"."TS#" "TS_NUMBER" FROM "SYS"."TAB$" "A209" WHERE
BITAND("A209"."PROPERTY",1024)=0) UNION ALL
(SELECT 'TABLE PARTITION' "OBJECT_TYPE",19 "OBJECT_TYPE_ID",5
"SEGMENT_TYPE_ID",
"A208"."OBJ#" "OBJECT_ID",
"A208"."FILE#" "HEADER_FILE",
"A208"."BLOCK#" "HEADER_BLOCK",
"A208"."TS#" "TS_NUMBER" FROM "SYS"."TABPART$" "A208") UNION ALL
(SELECT 'CLUSTER' "OBJECT_TYPE",3 "OBJECT_TYPE_ID",5 "SEGMENT_TYPE_ID",
"A207"."OBJ#" "OBJECT_ID",
"A207"."FILE#" "HEADER_FILE",
"A207"."BLOCK#" "HEADER_BLOCK",
"A207"."TS#" "TS_NUMBER" FROM "SYS"."CLU$" "A207") UNION ALL
(SELECT DECODE("A206"."TYPE#",8,'LOBINDEX','INDEX') "OBJECT_TYPE",1
"OBJECT_TYPE_ID",6 "SEGMENT_TYPE_ID",
"A206"."OBJ#" "OBJECT_ID",
"A206"."FILE#" "HEADER_FILE",
"A206"."BLOCK#" "HEADER_BLOCK",
"A206"."TS#" "TS_NUMBER" FROM "SYS"."IND$" "A206" WHERE "A206"."TYPE#"=1 OR
"A206"."TYPE#"=2 OR
"A206"."TYPE#"=3 OR
"A206"."TYPE#"=4 OR
"A206"."TYPE#"=6 OR
"A206"."TYPE#"=7 OR
"A206"."TYPE#"=8 OR
"A206"."TYPE#"=9) UNION ALL
(SELECT 'INDEX PARTITION'
"OBJECT_TYPE",20 "OBJECT_TYPE_ID",6 "SEGMENT_TYPE_ID",
"A205"."OBJ#" "OBJECT_ID",
"A205"."FILE#" "HEADER_FILE",
"A205"."BLOCK#" "HEADER_BLOCK",
"A205"."TS#" "TS_NUMBER" FROM "SYS"."INDPART$" "A205") UNION ALL
(SELECT 'LOBSEGMENT' "OBJECT_TYPE",21 "OBJECT_TYPE_ID",8 "SEGMENT_TYPE_ID",
"A204"."LOBJ#" "OBJECT_ID",
"A204"."FILE#" "HEADER_FILE",
"A204"."BLOCK#" "HEADER_BLOCK",
"A204"."TS#" "TS_NUMBER" FROM "SYS"."LOB$" "A204" WHERE
BITAND("A204"."PROPERTY",64)=0 OR
BITAND("A204"."PROPERTY",128)=128) UNION ALL
(SELECT 'TABLE SUBPARTITION' "OBJECT_TYPE",34 "OBJECT_TYPE_ID",5
"SEGMENT_TYPE_ID",
"A203"."OBJ#" "OBJECT_ID",
"A203"."FILE#" "HEADER_FILE",
"A203"."BLOCK#" "HEADER_BLOCK",
"A203"."TS#" "TS_NUMBER" FROM "SYS"."TABSUBPART$" "A203") UNION ALL
(SELECT 'INDEX SUBPARTITION' "OBJECT_TYPE",35 "OBJECT_TYPE_ID",6
"SEGMENT_TYPE_ID",
"A202"."OBJ#" "OBJECT_ID",
"A202"."FILE#" "HEADER_FILE",
"A202"."BLOCK#" "HEADER_BLOCK",
"A202"."TS#" "TS_NUMBER" FROM "SYS"."INDSUBPART$" "A202") UNION ALL
(SELECT DECODE("A201"."FRAGTYPE$",'P','LOB PARTITION','LOB SUBPARTITION')
"OBJECT_TYPE",DECODE("A201"."FRAGTYPE$",'P',40,41) "OBJECT_TYPE_ID",8
"SEGMENT_TYPE_ID",
"A201"."FRAGOBJ#" "OBJECT_ID",
"A201"."FILE#" "HEADER_FILE",
"A201"."BLOCK#" "HEADER_BLOCK",
"A201"."TS#" "TS_NUMBER" FROM "SYS"."LOBFRAG$" "A201")) "A196",
"SYS"."SEG$" "A195",
"SYS"."FILE$" "A194" WHERE "A195"."FILE#"="A196"."HEADER_FILE" AND
"A195"."BLOCK#"="A196"."HEADER_BLOCK" AND
"A195"."TS#"="A196"."TS_NUMBER" AND
"A195"."TS#"="A197"."TS#" AND
"A198"."OBJ#"="A196"."OBJECT_ID" AND
"A198"."OWNER#"="A199"."USER#"(+) AND
"A195"."TYPE#"="A196"."SEGMENT_TYPE_ID" AND
"A198"."TYPE#"="A196"."OBJECT_TYPE_ID" AND
"A195"."TS#"="A194"."TS#" AND
"A195"."FILE#"="A194"."RELFILE#") UNION ALL
(SELECT NVL("A193"."NAME",'SYS') "OWNER",
"A191"."NAME" "SEGMENT_NAME",NULL "PARTITION_NAME",
DECODE("A190"."TYPE#",1,'ROLLBACK',10,'TYPE2 UNDO') "SEGMENT_TYPE",
"A190"."TYPE#" "SEGMENT_TYPE_ID",NULL "SEGMENT_SUBTYPE",
"A192"."TS#" "TABLESPACE_ID",
"A192"."NAME" "TABLESPACE_NAME",
"A192"."BLOCKSIZE" "BLOCKSIZE",
"A189"."FILE#" "HEADER_FILE",
"A190"."BLOCK#" "HEADER_BLOCK",
"A190"."BLOCKS"*"A192"."BLOCKSIZE" "BYTES",
"A190"."BLOCKS" "BLOCKS",
"A190"."EXTENTS" "EXTENTS",
"A190"."INIEXTS"*"A192"."BLOCKSIZE" "INITIAL_EXTENT",
"A190"."EXTSIZE"*"A192"."BLOCKSIZE" "NEXT_EXTENT",
"A190"."MINEXTS" "MIN_EXTENTS",
"A190"."MAXEXTS" "MAX_EXTENTS",DECODE(BITAND("A190"."SPARE1",4194304),4194304,
"A190"."BITMAPRANGES",NULL) "MAX_SIZE",NULL "RETENTION",NULL "MINRETENTION",
"A190"."EXTPCT"
"PCT_INCREASE",DECODE(BITAND("A192"."FLAGS",32),32,TO_NUMBER(NULL),
DECODE("A190"."LISTS",0,1,
"A190"."LISTS"))
"FREELISTS",DECODE(BITAND("A192"."FLAGS",32),32,TO_NUMBER(NULL),
DECODE("A190"."GROUPS",0,1,"A190"."GROUPS")) "FREELIST_GROUPS",
"A190"."FILE#" "RELATIVE_FNO",BITAND("A190"."CACHEHINT",3)
"BUFFER_POOL_ID",BITAND("A190"."CACHEHINT",12)/4 "FLASH_CACHE",
BITAND("A190"."CACHEHINT",48)/16 "CELL_FLASH_CACHE",NVL("A190"."SPARE1",0)
"SEGMENT_FLAGS",
"A191"."US#" "SEGMENT_OBJD" FROM "SYS"."USER$" "A193","SYS"."TS$" "A192",
"SYS"."UNDO$" "A191",
"SYS"."SEG$" "A190",
"SYS"."FILE$" "A189" WHERE "A190"."FILE#"="A191"."FILE#" AND
"A190"."BLOCK#"="A191"."BLOCK#" AND
"A190"."TS#"="A191"."TS#" AND
"A190"."TS#"="A192"."TS#" AND
"A190"."USER#"="A193"."USER#"(+) AND
("A190"."TYPE#"=1 OR
"A190"."TYPE#"=10) AND
"A191"."STATUS$"<>1 AND
"A191"."TS#"="A189"."TS#" AND
"A191"."FILE#"="A189"."RELFILE#") UNION ALL
(SELECT NVL("A188"."NAME",'SYS') "OWNER",
TO_CHAR("A185"."FILE#")||'.'||TO_CHAR("A186"."BLOCK#") "SEGMENT_NAME",NULL
"PARTITION_NAME",
DECODE("A186"."TYPE#",2,'DEFERRED ROLLBACK',3,
'TEMPORARY',4,'CACHE',9,'SPACE HEADER','UNDEFINED') "SEGMENT_TYPE",
"A186"."TYPE#" "SEGMENT_TYPE_ID",NULL "SEGMENT_SUBTYPE",
"A187"."TS#" "TABLESPACE_ID",
"A187"."NAME" "TABLESPACE_NAME",
"A187"."BLOCKSIZE" "BLOCKSIZE",
"A185"."FILE#" "HEADER_FILE",
"A186"."BLOCK#" "HEADER_BLOCK",
"A186"."BLOCKS"*"A187"."BLOCKSIZE" "BYTES",
"A186"."BLOCKS" "BLOCKS",
"A186"."EXTENTS" "EXTENTS",
"A186"."INIEXTS"*"A187"."BLOCKSIZE" "INITIAL_EXTENT",
"A186"."EXTSIZE"*"A187"."BLOCKSIZE" "NEXT_EXTENT",
"A186"."MINEXTS" "MIN_EXTENTS",
"A186"."MAXEXTS" "MAX_EXTENTS",DECODE(BITAND("A186"."SPARE1",4194304),4194304,
"A186"."BITMAPRANGES",NULL) "MAX_SIZE",NULL
"RETENTION",NULL
"MINRETENTION",DECODE(BITAND("A187"."FLAGS",3),1,TO_NUMBER(NULL),
"A186"."EXTPCT")
"PCT_INCREASE",DECODE(BITAND("A187"."FLAGS",32),32,TO_NUMBER(NULL),
DECODE("A186"."LISTS",0,1,
"A186"."LISTS"))
"FREELISTS",DECODE(BITAND("A187"."FLAGS",32),32,TO_NUMBER(NULL),
DECODE("A186"."GROUPS",0,1,
"A186"."GROUPS")) "FREELIST_GROUPS",
"A186"."FILE#" "RELATIVE_FNO",BITAND("A186"."CACHEHINT",3)
"BUFFER_POOL_ID",BITAND("A186"."CACHEHINT",12)/4
"FLASH_CACHE",BITAND("A186"."CACHEHINT",48)/16
"CELL_FLASH_CACHE",NVL("A186"."SPARE1",0) "SEGMENT_FLAGS",
"A186"."HWMINCR" "SEGMENT_OBJD" FROM "SYS"."USER$"
"A188",
"SYS"."TS$" "A187",
"SYS"."SEG$" "A186",
"SYS"."FILE$" "A185" WHERE "A186"."TS#"="A187"."TS#" AND
"A186"."USER#"="A188"."USER#"(+) AND
"A186"."TYPE#"<>1 AND
"A186"."TYPE#"<>5 AND
"A186"."TYPE#"<>6 AND
"A186"."TYPE#"<>8 AND
"A186"."TYPE#"<>10 AND
"A186"."TYPE#"<>11 AND
"A186"."TS#"="A185"."TS#"
AND
"A186"."FILE#"="A185"."RELFILE#") UNION ALL
(SELECT NVL("A184"."NAME",'SYS') "OWNER",'HEATMAP' "SEGMENT_NAME",
NULL "PARTITION_NAME",'SYSTEM STATISTICS' "SEGMENT_TYPE",
"A182"."TYPE#" "SEGMENT_TYPE_ID",NULL "SEGMENT_SUBTYPE",
"A183"."TS#" "TABLESPACE_ID",
"A183"."NAME" "TABLESPACE_NAME",
"A183"."BLOCKSIZE" "BLOCKSIZE",
"A181"."FILE#" "HEADER_FILE",
"A182"."BLOCK#" "HEADER_BLOCK",
"A182"."BLOCKS"*"A183"."BLOCKSIZE" "BYTES",
"A182"."BLOCKS" "BLOCKS",
"A182"."EXTENTS" "EXTENTS",
"A182"."INIEXTS"*"A183"."BLOCKSIZE" "INITIAL_EXTENT",
"A182"."EXTSIZE"*"A183"."BLOCKSIZE" "NEXT_EXTENT",
"A182"."MINEXTS" "MIN_EXTENTS",
"A182"."MAXEXTS" "MAX_EXTENTS",DECODE(BITAND("A182"."SPARE1",4194304),4194304,
"A182"."BITMAPRANGES",NULL) "MAX_SIZE",NULL "RETENTION",NULL
"MINRETENTION",DECODE(BITAND("A183"."FLAGS",3),1,TO_NUMBER(NULL),
"A182"."EXTPCT")
"PCT_INCREASE",DECODE(BITAND("A183"."FLAGS",32),32,TO_NUMBER(NULL),
DECODE("A182"."LISTS",0,1,
"A182"."LISTS"))
"FREELISTS",DECODE(BITAND("A183"."FLAGS",32),32,TO_NUMBER(NULL),
DECODE("A182"."GROUPS",0,1,
"A182"."GROUPS")) "FREELIST_GROUPS",
"A182"."FILE#" "RELATIVE_FNO",BITAND("A182"."CACHEHINT",3) "BUFFER_POOL_ID",
BITAND("A182"."CACHEHINT",12)/4 "FLASH_CACHE",
BITAND("A182"."CACHEHINT",48)/16 "CELL_FLASH_CACHE",NVL("A182"."SPARE1",0) "SEGMENT_FLAGS",
"A182"."HWMINCR" "SEGMENT_OBJD" FROM "SYS"."USER$" "A184",
"SYS"."TS$" "A183",
"SYS"."SEG$" "A182",
"SYS"."FILE$" "A181" WHERE "A182"."TS#"="A183"."TS#" AND
"A182"."USER#"="A184"."USER#"(+) AND
"A182"."TYPE#"=11 AND
"A182"."TS#"="A181"."TS#" AND
"A182"."FILE#"="A181"."RELFILE#")) "A4") "A3", (SELECT "A5"."NAME" "OWNER",
"A6"."NAME" "OBJECT_NAME",
"A6"."SUBNAME" "SUBOBJECT_NAME",
"A6"."OBJ#" "OBJECT_ID",
"A6"."DATAOBJ#" "DATA_OBJECT_ID",DECODE("A6"."TYPE#",0,'NEXT OBJECT',1,'INDEX',
2,'TABLE',3,'CLUSTER',4,'VIEW',5,'SYNONYM',6,'SEQUENCE',7,'PROCEDURE',
8,'FUNCTION',9,'PACKAGE',11,'PACKAGE BODY',12,'TRIGGER',13,'TYPE',
14,'TYPE BODY',19,'TABLE PARTITION',20,'INDEX PARTITION',21,'LOB',
22,'LIBRARY',23,'DIRECTORY',24,'QUEUE',28,'JAVA SOURCE',29,'JAVA CLASS',
30,'JAVA RESOURCE',32,'INDEXTYPE',33,'OPERATOR',34,'TABLE SUBPARTITION',
35,'INDEX SUBPARTITION',40,'LOB PARTITION',41,'LOB SUBPARTITION',
42,NVL( (SELECT 'REWRITE EQUIVALENCE' FROM SYS."SUM$" "A52" WHERE
"A52"."OBJ#"="A6"."OBJ#" AND
BITAND("A52"."XPFLAGS",8388608)=8388608),'MATERIALIZED VIEW'),43,'DIMENSION',
44,'CONTEXT',46,'RULE SET',47,'RESOURCE PLAN',48,'CONSUMER GROUP',
55,'XML SCHEMA',56,'JAVA DATA',57,'EDITION',59,'RULE',
60,'CAPTURE',61,'APPLY',62,'EVALUATION CONTEXT',66,'JOB',67,'PROGRAM',
68,'JOB CLASS',69,'WINDOW',72,'SCHEDULER GROUP',74,'SCHEDULE',79,'CHAIN',
81,'FILE GROUP',82,'MINING MODEL',87,'ASSEMBLY',90,'CREDENTIAL',
92,'CUBE DIMENSION',93,'CUBE',94,'MEASURE FOLDER',95,'CUBE BUILD PROCESS',
100,'FILE WATCHER',101,'DESTINATION',114,'SQL TRANSLATION PROFILE',
115,'UNIFIED AUDIT POLICY','UNDEFINED')
"OBJECT_TYPE",
"A6"."CTIME" "CREATED",
"A6"."MTIME" "LAST_DDL_TIME",TO_CHAR("A6"."STIME",'YYYY-MM-DD:HH24:MI:SS') "TIMESTAMP",
DECODE("A6"."STATUS",0,'N/A',1,'VALID','INVALID')
"STATUS",DECODE(BITAND("A6"."FLAGS",2),0,'N',2,'Y','N')
"TEMPORARY",DECODE(BITAND("A6"."FLAGS",4),0,'N',4,'Y','N')
"GENERATED",DECODE(BITAND("A6"."FLAGS",16),0,'N',16,'Y','N') "SECONDARY",
"A6"."NAMESPACE" "NAMESPACE",
"A6"."DEFINING_EDITION" "EDITION_NAME",
DECODE(BITAND("A6"."FLAGS",196608),65536,'METADATA LINK',131072,'OBJECT LINK','NONE')
"SHARING",
CASE WHEN ("A6"."TYPE#"=4 OR
"A6"."TYPE#"=5 OR
128. SQL> variable c clob
SQL> begin
2 dbms_utility.expand_sql_text(
3 q'{select * from emp_temporal
4 as of period for valid_range '01-FEB-14'}',:c);
6 end;
7 /
129. SQL> print c
C
------------------------------------------------
SELECT
"A1"."EMPNO"
...
FROM (
SELECT "A2"."EMPNO" "EMPNO",
"A2"."NAME"
FROM "SCOTT"."EMP_TEMPORAL" "A2"
WHERE ("A2"."START_DT" IS NULL
OR "A2"."START_DT"<='01-FEB-14')
AND ("A2"."END_DT" IS NULL
OR "A2"."END_DT">'01-FEB-14')
) "A1"
131. SQL> desc EMPLOYEE
Name Null? Type
------------------------- -------- ---------------
EMPNO NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(12,2)
DEPTNO NUMBER(2)
SQL> alter table EMPLOYEE add period for TIME_RANGE;
Table altered.
132. SQL> desc EMPLOYEE
Name Null? Type
------------------------- -------- ---------------
EMPNO NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(12,2)
DEPTNO NUMBER(2)
Still looks the same?
133. SQL> select column_name , hidden_column
2 from user_tab_cols
3 where table_name = 'EMPLOYEE'
4 order by column_id;
COLUMN_NAME HIDDEN_COLUMN
------------------------------ -------------
EMPNO NO
ENAME NO
JOB NO
MGR NO
HIREDATE NO
SAL NO
COMM NO
DEPTNO NO
TIME_RANGE_START YES
TIME_RANGE_END YES
TIME_RANGE YES
143. SQL> update EMP_TEMPORAL
2 as of period for valid_range '01-FEB-14'
3 set sal = 10
4 where empno = 20;
as of period for valid_range '01-FEB-14'
*
ERROR at line 2:
ORA-08187: snapshot expression not allowed here
144. SQL> update (
2 select * from EMP_TEMPORAL
3 as of period for valid_range '01-FEB-14'
4 where empno = 7369
5 )
6 set sal = 10;
1 row updated.
145. "Temporal validity is not supported
with a multitenant container
database"
(12.1.0.1)
150. SQL> create table T
2 ( x int,
3 y int);
Table created.
SQL> create index IX on T (x+0);
Index created.
SQL> alter table T set unused column Y;
Table altered.
151. SQL> select column_name, data_default
2 from USER_TAB_COLS
3 where table_name = 'T'
4 order by column_id;
COLUMN_NAME DATA_DEFAULT
------------------------------ ----------------
X
SYS_NC00003$ "X"+0
SYS_C00002_14020720:50:28$
153. SQL> create table T ( c1 int, c2 int, c3 int );
SQL> desc T
Name Null? Type
---------------------------- -------- -------
C1 NUMBER(38)
C2 NUMBER(38)
C3 NUMBER(38)
SQL> alter table T modify c1 invisible;
SQL> desc T
Name Null? Type
---------------------------- -------- -------
C2 NUMBER(38)
C3 NUMBER(38)
155. SQL> alter table T modify c1 visible;
SQL> desc T
Name Null? Type
---------------------------- -------- -------
C2 NUMBER(38)
C3 NUMBER(38)
C1 NUMBER(38)
157. SQL> desc T
Name Null? Type
---------------------- -------- --------------------
C1 NUMBER(38)
C2 NUMBER(38)
C3 NUMBER(38)
SQL> create or replace
2 procedure P is
3 begin
4 insert into T values (1,10,100);
6 end;
SQL> exec P
SQL> select * from T;
C1 C2 C3
---------- ---------- ----------
1 10 100
158. SQL> alter table T modify c1 invisible;
Table altered.
SQL> alter table T modify c1 visible;
Table altered.
SQL> desc T
Name Null? Type
------------------- -------- --------------------
C2 NUMBER(38)
C3 NUMBER(38)
C1 NUMBER(38)
166. SQL> alter table T add c3 int;
Table altered.
SQL> create or replace
2 procedure NEW_APP is
3 begin
4 for i in ( select c1,c2,c3 from T )
5 loop
6 dbms_output.put_line(i.c1);
7 dbms_output.put_line(i.c2);
8 dbms_output.put_line(i.c3);
9 end loop;
10 end;
11 /
Procedure created.
170. SQL> exec BAD_APP
BEGIN BAD_APP; END;
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00905: object SCOTT.BAD_APP is invalid
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
173. SQL> create table T ( c1 int, c2 int invisible);
Table created.
SQL> desc T
Name Null? Type
------------------------- -------- ------------------
C1 NUMBER(38)
SQL> set colinvisible ON
SQL> desc T
Name Null? Type
------------------------- -------- -----------
C1 NUMBER(38)
C2 (INVISIBLE) NUMBER(38)
174.
175. SQL> alter table T add "C3 (INVISIBLE)" int;
Table altered.
SQL> desc T
Name Null? Type
------------------------- -------- ------------
C1 NUMBER(38)
C3 (INVISIBLE) NUMBER(38)
C2 (INVISIBLE) NUMBER(38)
181.
SQL> alter table child modify
2 constraint CHILD_FK on delete cascade;
ERROR at line 1:
ORA-00933: SQL command not properly ended
SQL> alter table child add
2 constraint NEW_FK foreign key ( p )
3 references parent ( p ) on delete cascade;
ERROR at line 1:
ORA-02275: such a referential constraint already
exists in the table
195. Lyrics
"Players gonna play, play, play, play, play
Haters gonna hate, hate, hate, hate, hate
Gonna shake, shake, shake, shake, shake
Shake it off, Shake it off
Fakers gonna fake, fake, fake, fake, fake
Shake it off, I shake it off,
I, I, I shake it off, I shake it off,
I, I, I shake it off, I shake it off,
I, I, I shake it off, I shake it off
Hey, hey, hey
Yeah ohhh
Shake it off, I shake it off,
I, I, I shake it off, I shake it off,
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off"
207.
SQL> create sequence seq;
Sequence created.
SQL> create table T ( pk number , c1 int);
Table created.
SQL> create or replace
2 trigger FILL_IN_PK
3 before insert on T
4 for each row
5 begin
6 select seq.nextval into :new.pk from dual;
7 end;
8 /
Trigger created.
208.
SQL> insert into T values (10,20);
1 row created.
SQL> select * from T;
PK C1
---------- ----------
1 20
210.
SQL> create or replace
2 trigger FILL_IN_PK
3 before insert on T
4 for each row
5 when ( new.pk is null )
6 begin
7 select seq.nextval into :new.pk from dual;
8 end;
9 /
SQL> insert into T values (20,20);
SQL> select * from T;
PK C1
---------- ----------
1 20
20 20
221.
SQL> drop table t purge;
Table dropped.
SQL> create table T
2 ( pk number generated as identity ,
3 c1 int);
Table created.
SQL> select object_id, object_name,
2 object_type from user_objects;
OBJECT_ID OBJECT_NAME OBJECT_TYPE
---------- ------------------ ------------
414914 T TABLE
414915 ISEQ$$_414914 SEQUENCE
222.
SQL> @pr "select * from user_sequences"
SEQUENCE_NAME : ISEQ$$_414914
MIN_VALUE : 1
MAX_VALUE : 9999999999999999999999
INCREMENT_BY : 1
CYCLE_FLAG : N
ORDER_FLAG : N
CACHE_SIZE : 20
LAST_NUMBER : 1
PARTITION_COUNT :
SESSION_FLAG : N
KEEP_VALUE : N
223.
SQL> create table T
2 ( pk number
3 generated as identity (cache 1000)
4 , c1 int);
Table created.
SQL> insert into T values (1,2);
insert into T values (1,2)
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always
identity column
229.
SQL> create or replace
2 procedure ins(p_pk int, p_status varchar2) is
3 begin
4 if p_status is null then
5 insert into T (pk, status)
6 values (p_pk, default );
7 else
8 insert into T (pk, status)
9 values (p_pk, p_status );
10 end if;
11 end;
12 /
Procedure created.
234. SQL> create or replace procedure P is
2 x int;
3 y int;
4 begin
5
6 x := 10;
7
8 select count(*)
9 into y
10 from all_objects
11 where object_name = 'NUFFIN';
12
13 x := x / y;
14
15 select rownum
16 into x
17 from dual;
18
19 end;
20 /
Procedure created.
235. SQL> exec P;
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "SCOTT.P", line 13
ORA-06512: at line 1
236. SQL> create or replace procedure P is
2 x int;
3 y int;
4 begin
5
6 x := 10;
7
8 select count(*)
9 into y
10 from all_objects
11 where object_name = 'NUFFIN';
12
13 x := x / y;
14
15 select rownum
16 into x
17 from dual;
18
19 end;
20 /
Procedure created.
238. SQL> create or replace procedure P is
2 x int;
3 y int;
4 begin
5
6 x := 10;
7
8 select count(*)
9 into y
10 from all_objects
11 where object_name = 'NUFFIN';
12
13 x := x / y;
14
15 select rownum
16 into x
17 from dual;
18
19 exception
20 when others then
21 log_error;
22 raise;
23 end;
24 /
239. SQL> exec P;
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to
zero
ORA-06512: at "SCOTT.P", line 22
240. SQL> create or replace procedure P is
2 x int;
3 y int;
4 begin
5
6 x := 10;
7
8 select count(*)
9 into y
10 from all_objects
11 where object_name = 'NUFFIN';
12
13 x := x / y;
14
15 select rownum
16 into x
17 from dual;
18
19 exception
20 when others then
21 log_error;
22 raise;
23 end;
24 /
?
241. SQL> create or replace procedure P is
2 x int;
3 y int;
4 begin
5
6 x := 10;
7
8 select count(*)
9 into y
10 from all_objects
11 where object_name = 'NUFFIN';
12
13 x := x / y;
14
15 select rownum
16 into x
17 from dual;
18
19 exception
20 when others then
21 dbms_output.put_line(
22 dbms_utility.format_call_stack);
23 raise;
24 end;
25 /
242. SQL> exec P;
----- PL/SQL Call Stack -----
object line object
handle number name
0x14b1e3900 21 procedure SCOTT.P
0x12521b8d8 1 anonymous block
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "SCOTT.P", line 23
ORA-06512: at line 1
243. SQL> create or replace procedure P is
2 x int;
3 y int;
4 l_debug varchar2(4000);
5 begin
6 l_debug := dbms_utility.format_call_stack;
7
8 x := 10;
9
10 l_debug := dbms_utility.format_call_stack;
11 select count(*)
12 into y
13 from all_objects
14 where object_name = 'NUFFIN';
15
16 l_debug := dbms_utility.format_call_stack;
17 x := x / y;
18
19 l_debug := dbms_utility.format_call_stack;
20 select rownum
21 into x
22 from dual;
23
24 exception
25 when others then
26 dbms_output.put_line(l_debug);
27 raise;
28 end;
244. SQL> exec P;
----- PL/SQL Call Stack -----
object line object
handle number name
0x14b1e3900 16 procedure SCOTT.P
0x12521b8d8 1 anonymous block
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "SCOTT.P", line 27
ORA-06512: at line 1
245. SQL> create or replace procedure P is
2 x int;
3 y int;
4 l_debug varchar2(4000);
5 begin
6 l_debug := dbms_utility.format_call_stack;
7
...
24 exception
25 when others then
26 l_debug := substr(l_debug,instr(l_debug,chr(10),1,3));
27 l_debug := regexp_replace(l_debug,chr(10)||'.*$');
28 dbms_output.put_line(l_debug);
29 raise;
30 end;
31 /
246. SQL> exec P;
0x14b1e3900 16 procedure SCOTT.P
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "SCOTT.P", line 29
ORA-06512: at line 1
249. SQL> create or replace procedure P is
2 x int;
3 y int;
4 l_debug varchar2(4000);
5 begin
6 l_debug := dbms_utility.format_call_stack;
7
8 x := 10;
9
...
23
24 exception
25 when others then
26 dbms_output.put_line(
27 DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );
28 raise;
29 end;
250. SQL> exec P;
ORA-06512: at "SCOTT.P", line 17
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "SCOTT.P", line 27
ORA-06512: at line 1
253. SQL> create or replace procedure P is
2 x int;
3 y int;
4 begin
5 x := 10;
6
...
17
18 exception
19 when others then
20 for i in 1 .. utl_call_stack.dynamic_depth loop
21 dbms_output.put_line(
22 utl_call_stack.unit_line(i)||'-'||
23 utl_call_stack.concatenate_subprogram(
24 utl_call_stack.subprogram(i))
25 );
26 end loop;
27 raise;
28 end;
254. SQL> exec P;
21-P
1-__anonymous_block
BEGIN P; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
ORA-06512: at "SCOTT.P", line 27
ORA-06512: at line 1
257. SQL> create or replace package body PKG is
2
3 procedure p is
...
19
20 exception
21 when others then
22 for i in 1 .. utl_call_stack.dynamic_depth loop
23 dbms_output.put_line(
24 utl_call_stack.unit_line(i)||'-'||
25 utl_call_stack.concatenate_subprogram(
26 utl_call_stack.subprogram(i))
27 );
28 end loop;
29 raise;
30 end;
31
32 procedure p1 is begin p; end;
33 procedure p2 is begin p1; end;
34 procedure p3 is begin p2; end;
35
36 end;
37 /
258. SQL> exec pkg.p3
23-PKG.P
32-PKG.P1
33-PKG.P2
34-PKG.P3
1-__anonymous_block
BEGIN pkg.p3; END;
*
ERROR at line 1:
ORA-01476: divisor is equal to
zero
ORA-06512: at "SCOTT.PKG", line 29
ORA-06512: at "SCOTT.PKG", line 32
ORA-06512: at "SCOTT.PKG", line 33
ORA-06512: at "SCOTT.PKG", line 34
ORA-06512: at line 1
260. SQL> exec pkg.p3
23-PKG.P
32-PKG.P1
33-PKG.P2
34-PKG.P3
1-__anonymous_block
BEGIN pkg.p3; END;
subprogram !!!
*
ERROR at line 1:
ORA-01476: divisor is equal to
zero
ORA-06512: at "SCOTT.PKG", line 29
ORA-06512: at "SCOTT.PKG", line 32
ORA-06512: at "SCOTT.PKG", line 33
ORA-06512: at "SCOTT.PKG", line 34
ORA-06512: at line 1
272. SQL> create or replace
2 function my_initcap(p_string varchar2) return varchar2 is
3 l_string varchar2(1000) := p_string;
4 begin
5 if regexp_like(l_string,'(Mac[A-Z]|Mc[A-Z])') then
6 null;
7 elsif l_string like '''%' then
8 null;
9 else
10 l_string := initcap(l_string);
11 if l_string like '_''S%' then
12 null;
13 else
14 l_string := replace(l_string,'''S','''s');
15 end if;
16 end if;
17
18 return l_string;
19 end;
20 /
Function created.
280. SQL> select
2 case
3 when regexp_like(vendor,'(Mac[A-Z]|Mc[A-Z])') then vendor
4 when vendor like '''%' then vendor
5 when initcap(vendor) like '_''S%' then vendor
6 else replace(initcap(vendor),'''S','''s')
7 end ugh
8 from service_provider;
UGH
-------------------------------
Jones
Brown
Smith
McDonald
Johnson's
281. "Always code as if the person who
ends up maintaining your code is a
psychopathic killer who knows
where you live."
- source unknown
283. SQL> WITH
2 function my_initcap(p_string varchar2)
3 return varchar2 is
4 l_string varchar2(1000) := p_string;
5 begin
6 if regexp_like(l_string,'(Mac[A-Z]|Mc[A-Z])') then
7 null;
8 elsif l_string like '''%' then
...
17
18 return l_string;
19 end;
20 select my_initcap(vendor)
21 from service_provider;
MY_INITCAP(VENDOR)
-----------------------------------------
Jones
Brown
Smith
McDonald
O'Brien
Johnson's
285. SQL> WITH
2 function is_scottish(p_string varchar2) return boolean is
3 begin
4 return regexp_like(p_string,'(Mac[A-Z]|Mc[A-Z])');
5 end;
6 function my_initcap(p_string varchar2) return varchar2 is
7 l_string varchar2(1000) := p_string;
8 begin
9 if is_scottish(l_string) then
10 null;
11 elsif l_string like '''%' then
12 null;
13 else
14 l_string := initcap(l_string);
15 if l_string like '_''S%' then
16 null;
17 else
18 l_string := replace(l_string,'''S','''s');
19 end if;
20 end if;
21
22 return l_string;
23 end;
24 select my_initcap(surname)
25 from names;
26 /
288. SQL> WITH
2 function my_initcap(p_string varchar2) return varchar2 is
3 l_string varchar2(1000) := p_string;
function my_initcap(p_string varchar2) return varchar2 is
*
ERROR at line 2:
ORA-06553: PLS-103: Encountered the symbol "end-of-file" when
expecting one of the following:
. ( * @ % & = - + ; < / > at in is mod remainder not rem
<an exponent (**)> <> or != or ~= >= <= <> and or like like2
like4 likec between || multiset member submultiset
SQL> begin
2 if regexp_like(l_string,'(Mac[A-Z]|Mc[A-Z])') then
3 null;
4 elsif l_string like '''%' then
5 null;
6 else
7 l_string := initcap(l_string);
294. SQL> insert into CONTRACTS
2 WITH
3 function my_initcap(p_string varchar2)
4 return varchar2 is
5 l_string varchar2(1000) := p_string;
6 begin
...
20 end;
21 select my_initcap(vendor)
22 from service_provider;
23 /
WITH
*
ERROR at line 2:
ORA-32034: unsupported use of WITH clause
295. SQL> insert /*+ WITH_PLSQL */ into CONTRACTS
2 WITH
3 function my_initcap(p_string varchar2)
4 return varchar2 is
5 l_string varchar2(1000) := p_string;
6 begin
...
20 end;
21 select my_initcap(surname)
22 from names;
23 /
5 rows inserted.
297. SQL> with
2 function f return timestamp as
3 begin
4 return systimestamp;
5 end;
6 select f
7 from dual
8 connect by level <= 10;
9 /
F
----------------------------------------
05-JAN-14 08.09.43.969000000 PM
05-JAN-14 08.09.43.970000000 PM
05-JAN-14 08.09.43.970000000 PM
05-JAN-14 08.09.43.971000000 PM
...
301. SQL> with
2 function f return timestamp as
3 begin
4 return systimestamp;
5 end;
6 select ( select f from dual )
7 from dual
8 connect by level <= 10;
9 /
F
----------------------------------------
05-JAN-14 08.11.50.145000000 PM
05-JAN-14 08.11.50.145000000 PM
05-JAN-14 08.11.50.145000000 PM
05-JAN-14 08.11.50.145000000 PM
...
303. SQL> create or replace
2 function F return number is
3 begin
4 return 1;
5 end;
6 /
Function created.
304. SQL> select sum(f)
2 from
3 ( select level from dual
4 connect by level <= 1000 ),
5 ( select level from dual
6 connect by level <= 1000 )
7 ;
SUM(F)
----------
1000000
Elapsed: 00:00:02.04
305. SQL> with
2 function f1 return number is
3 begin
4 return 1;
5 end;
6 select sum(f1)
7 from
8 ( select level from dual
9 connect by level <= 1000 ),
10 ( select level from dual
11 connect by level <= 1000 )
12 /
SUM(F1)
----------
1000000
Elapsed: 00:00:00.52
327. SQL> select empno, ename, hiredate
2 from scott.emp
3 order by hiredate desc
4 fetch LAST 5 rows only;
fetch LAST 5 rows only
*
ERROR at line 4:
ORA-00905: missing keyword
328. SQL> select *
2 from (
3 select empno, ename, hiredate
4 from scott.emp
5 order by hiredate asc
6 fetch first 5 rows only
7 )
8 order by hiredate desc;
EMPNO ENAME HIREDATE
---------- ---------- ---------
7698 BLAKE 01-MAY-81
7566 JONES 02-APR-81
7521 WARD 22-FEB-81
7499 ALLEN 20-FEB-81
7369 SMITH 17-DEC-80
332. "Note that in real life,
you would use bind variables
instead of hard-coded literals"
- Tom Kyte,
Oracle Magazine, Sep 13
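The bind-variable form the quote recommends can be sketched as follows (a minimal sketch, assuming the SCOTT.EMP demo schema in SQL*Plus; the 12c row-limiting FETCH FIRST clause accepts a bind variable directly):

```sql
SQL> variable n number
SQL> exec :n := 5

SQL> select empno, ename, hiredate
  2  from scott.emp
  3  order by hiredate desc
  4  fetch first :n rows only;
```

Note this uses a true bind variable (`:n`); the crash shown on the next slide involves a PL/SQL local variable referenced in the clause, which is a different code path.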
333. SQL> declare
2 l_num number := 5;
3 begin
4 for i in (
5 select empno, ename, hiredate
6 from scott.emp
7 order by hiredate desc
8 fetch first l_num rows only
9 )
10 loop
11 null;
12 end loop;
13 end;
14 /
declare
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 26618
Session ID: 25 Serial number: 53023
334. SQL> declare
2 l_num number := 5;
3 begin
4 for i in (
5 select empno, ename, hiredate
6 from scott.emp
7 order by hiredate desc
8 fetch first cast(l_num as number) rows only
9 )
10 loop
11 null;
12 end loop;
13 end;
14 /
PL/SQL procedure successfully completed.
fixed in 12.1.0.2
366. SQL> select acct, ceil(max(finished_losing-started_losing))
2 from account_txns
3 MATCH_RECOGNIZE
4 (
5 partition by acct
6 order by tstamp
7 measures
8 tstamp as tstamp, outcome as outcome,
9 first(still_losing.tstamp) started_losing,
10 got_lucky.tstamp finished_losing
11 one row per match
12 pattern ( still_losing* got_lucky )
13 define
14 still_losing as outcome = 'Lose'
15 and still_losing.outcome =
16 first(still_losing.outcome),
17 got_lucky as outcome = 'Win'
18 )
19 group by acct
20 order by 1,2;
375. SQL> desc ACCOUNTS
Name Null? Type
----------------------------- -------- ------------
ID NUMBER(8)
NAME VARCHAR2(30)
EMAIL_ADDRESS VARCHAR2(30)
SQL> select * from ACCOUNTS;
ID NAME EMAIL_ADDRESS
-------- -------------------- -------------------
1 Suzanne suzy_q@yahoo.com
2 John Smith john.smith@hotmail.com
...
377. SQL> conn scott/tiger
Connected.
SQL> select * from ACCOUNTS;
ID NAME EMAIL_ADDRESS
-------- -------------------- -------------------
1 Suzanne xxxx@yahoo.com
2 John Smith xxxx@hotmail.com
...
385. SQL> create table scott.emp2 as
2 select * from scott.emp;
Table created.
SQL> select *
2 from scott.emp e,
3 scott.emp2 e2,
4 scott.dept d
5 where e.deptno = d.deptno(+)
6 and e2.deptno = d.deptno(+)
7 and e.empno = e2.empno
8 /
where e.deptno = d.deptno(+)
*
ERROR at line 5:
ORA-01417: a table may be outer joined to at
most one other table
392. SQL> select e.empno, d.deptno, b.benefits
2 from scott.emp e,
3 ( select benefits
4 from scott.DEPT_BENEFITS d
5 where d.deptno = e.deptno
6 ) b
7 order by 1,3;
where d.deptno = e.deptno
*
ERROR at line 5:
ORA-00904: "E"."DEPTNO": invalid identifier
396. SQL> select e.empno, d.deptno, b.benefits
2 from scott.emp e,
3 ( select benefits
4 from scott.DEPT_BENEFITS d
5 where d.deptno = e.deptno
6 ) b
7
8 order by 1,3;
407.
SQL> alter user TO_BE_EDITIONED enable editions;
alter user TO_BE_EDITIONED enable editions
*
ERROR at line 1:
ORA-38819: user TO_BE_EDITIONED owns one or more
objects whose type is editionable and that have
noneditioned dependent objects