This document provides guidance on how to improve one's Oracle career and make the most of Oracle in 2010. It recommends focusing on becoming more proactive by regularly checking databases for potential issues before they occur, improving backup and recovery strategies through regular testing of backups, and leveraging the capabilities of Oracle Data Pump beyond basic data movement, such as for data masking, metadata management, and cloning of users and databases.
HOW TO IMPROVE YOUR ORACLE CAREER, AND MAKE MAGIC WITH ORACLE

By Francisco Munoz Alvarez – August 2010
Introduction
The main question about the DBA job I hear all the time is: How can I become a successful DBA?

Most of the people I talk to who have difficulty starting out in their DBA career really struggle to absorb the mountainous volume of information that a DBA needs to know these days. The evolution of the DBA role in the past few years has been amazing: from a role that was basically responsible for administering one or two small Oracle databases and interacting very closely with some system administrators, it has grown into a super role (most of the time absorbing system and network administrator responsibilities). Some modern DBA responsibilities could include managing:
Several Oracle Databases and Data Warehouses
High availability environments like RAC and standby databases
Other types of RDBMS (MySQL, SQL Server, DB2, etc.)
Support servers (application and DB)
Security and network stability
Storage and clusters
Mentoring of other DBAs
The backup and recovery strategy
User problems (including the functional side of applications)
SQL and PL/SQL code reviews
Controlled promotions to production environments
As an example, here are some common duties for a DBA in today's world:

• Monitor database instances on a daily basis to ensure availability.
• Resolve unavailability issues.
• Collect system statistics and performance data for trending and configuration analysis.
• Configure and tune DB instances for optimal performance under application-specific guidelines.
• Analyze and administer DB security. Control and monitor user access. Audit DB usage when necessary.
• Monitor backup procedures and provide recovery when needed.
• Develop and test backup and recovery procedures.
• Upgrade RDBMS software and apply patches when needed.
• Upgrade or migrate database instances as necessary.
• Support application developers with any and all DB-related activities.
• Keep up with DB trends and technologies.
• Use new technologies when applicable.
• Install, test, and evaluate new Oracle-related products.
• Perform storage and physical design.
• Balance design issues to achieve optimal performance.
• Create, configure, and design new DB instances.
• Diagnose, troubleshoot, and resolve any DB-related problems.
• Work with Oracle Support if necessary to bring problems to a successful resolution.
• Ensure that Oracle networking software is configured and running properly.
• Work with system administrators (UNIX & NT) to ensure Oracle-related matters are handled properly, or in some cases, do it yourself.
• Train new DBAs.
• Create, manage, and monitor standby databases.
• Understand your user applications and needs.
• Configure and manage database and application servers.
• Manage and configure clusters, DWs, and AS.
• Configure and manage different types of RDBMS.
• XML, Java, PHP, HTML, Linux, Unix, Windows scripting.
• Create any necessary scripts for effective and occasionally periodic DB maintenance activities.
• Capacity planning / hardware planning.
As you can easily see, the DBA job is not easy, and each professional needs to be able to multitask and manage a lot of responsibilities and stress.

In this paper, we will see several examples of how to improve your DBA career and how to become a really successful DBA.
First, Learn to change yourself
If you want to become a successful professional, first you need to educate yourself to be successful! Your future success depends only on your attitude today. You control your life, nobody else!

Figure 1

Becoming a successful DBA is a combination of:

Your professional attitude: always think positively and always look for solutions instead of drowning in a glass of water.
Learning how to research: before doing something, investigate, search the internet, and read the manuals. You need to show that you know how to do proper research and find solutions to your problems yourself.
Innovation: don't wait for others to do your job, and don't stop caring about the business just because the other DBAs do. Learn to innovate, learn to become a leader, and make everyone follow your example through results. Think different!
Proper communication: the best way to learn how to communicate effectively is to learn to listen first. Listen, then analyze the context expressed, and only then communicate an answer in a professional and honest way to your peers. Always treat everyone the same way you would like to be treated.
Albert Einstein once said:
“If I had one hour to save the world, I would spend fifty-five minutes defining the
problem and only five minutes finding the solution”
Learning to be Proactive
Why check for problems only when they are critical, or when it is too late and the database is down, or the users are screaming?

Being proactive is the best approach to keep your DB healthy and to show your company, or your clients, that you really care about them.

Many DBAs spend most of their time being firefighters only, fixing problems and working on users' requests all the time. They don't do any proactive work; this mentality will only cause them an overload of work, thousands of dollars of overtime, several hours in which the users have no access to their data, poor application performance, and, worst of all, several unhappy users thinking that you don't have the knowledge needed to take care of their data.

Let's mention a small example: you have the archive log area alert set to fire when it is 95% full, and this happens in the middle of the night. Some DBAs will take the alert seriously and solve the problem quickly; others will wait until the next day to take care of it because they are tired, or sleeping, or in a place without internet access at the moment the alert arrives. It would be a lot easier if they set a proactive alert to fire at 75% or 85%, or, even better, took a look at the general health of the DB before leaving their shift, to try to detect and solve any possible problem before it becomes a real one and they are woken in the middle of the night or during the weekend (remember how important your personal and family time is). I always recommend that DBAs run two checklists daily: one at the start of their shift and the other before they leave it.

I know several DBAs who complain all the time about how many calls they get when they are on call, but they don't do anything to solve the root problem; they only spend their time on the symptoms.

On my blog you can find an Oracle checklist script that will help make your life a little easier (this is not my complete script, but it will be a good start for you). This script is a compilation of several normal checklists; you can set them up with your own requirements and thresholds, and always remember to have a baseline to compare against. This script will not only help you detect future or current problems, but will also help you detect possible tuning requirements.
Here is an example of the script's first-phase output:

– ———————————————————————– –
– Oracle Instance Information –
– ———————————————————————– –
Cpu_Count            4                 | Host_Name        OLIVER
Instance_Name        prod              | Database_Status  ACTIVE
Status               OPEN              | Startup_Time     10-01-2009 19:50
Version              11.1.0.7.0        | Instance_Role    PRIMARY_INSTANCE
Database Space (Mb)  36604             | SGA (Mb)         511
Nb. Datafiles        43                | Nb. Tempfiles    1

Archive destination   LOCATION=E:oracleoradataprodarchive
Database log mode     ARCHIVELOG
Background Dump Dest  d:oraclediagrdbmsprodprodtrace
Spfile                D:ORACLEPRODUCT11.1PRODDATABASESPFILEPROD.ORA
Redo size (Kb)        102400

– ———————————————————————– –
– Instance CheckList –
– ———————————————————————– –
Instance Status                   OK

– ———————————————————————– –
– Performance Memory CheckList –
– ———————————————————————– –
Total Sessions < 700              OK        19
Active sessions number < 15       OK         9
Data Buffer Hit Ratio > 80        OK        97
L.Buffer Reload Pin Ratio > 99    OK        99
Row Cache Miss Ratio < 0.015      NO     1.351
Dict.Buffer Hit Ratio > 80        OK        99
Log Buffer Waits = 0              NO       110
Log Buffer Retries < 0.0010       OK         0
Switch number (Daily Avg) < 5     OK         1
Jobs Broken = 0                   OK         0
Shared_Pool Failure = 0           OK         0

– ———————————————————————– –
– Dataguard CheckList –
– ———————————————————————– –
Dataguard Errors = 0              OK         0
Dataguard Gap = 0                 OK         0
Archives not Applied < 5          OK         2

– ———————————————————————– –
– Storage CheckList –
– ———————————————————————– –
Dba_Tablespaces Status            OK
V$Datafile Status                 OK
V$Recover_File                    OK
Tablespace in Backup Mode = 0     OK         0
Tablespace < 95%                  OK         0
Objects Invalid = 0               NO       147
Indexes unusable = 0              OK         0
Trigger Disabled = 0              NO         5
Constraint Disabled = 0           NO         2
Objects close max extents = 0     OK         0
Objects can not extend = 0        NO       552
User Objects on Systems = 0       NO        26
FK Without Index = 0              NO       138
Listener Status                   OK
V$Log Status                      OK
V$Tempfile Status                 OK
V$Recovery_Log                    OK

– ———————————————————————– –
– Installed options –
– ———————————————————————– –
Objects option
Connection multiplexing option
Connection pooling option
Database queuing option
Incremental backup and recovery option
Instead-of triggers option
Parallel load option
Proxy authentication/authorization option
Plan Stability option
Coalesce Index option
Transparent Application Failover option
Sample Scan option
Java option
OLAP Window Functions option
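The OK/NO flags in a checklist report like this boil down to simple threshold comparisons. Below is a minimal sketch of that logic as a shell function; the metric names and values are illustrative only (in a real checklist each value would come from a SQL*Plus query against the v$ views, which is omitted here):

```shell
#!/bin/sh
# check NAME VALUE OP LIMIT
# Prints one checklist line in the report's "item  status  value" style.
# OP is lt, gt, or eq; the comparison is done in awk so that decimal
# values such as 1.351 compare numerically, not as strings.
check() {
  name=$1; value=$2; op=$3; limit=$4
  status=$(awk -v v="$value" -v l="$limit" -v o="$op" 'BEGIN {
    ok = (o == "lt" && v+0 <  l+0) || \
         (o == "gt" && v+0 >  l+0) || \
         (o == "eq" && v+0 == l+0)
    print (ok ? "OK" : "NO")
  }')
  printf '%-34s %-3s %10s\n' "$name" "$status" "$value"
}

# Illustrative values only; these are not from a live database.
check "Data Buffer Hit Ratio > 80"   97    gt 80
check "Row Cache Miss Ratio < 0.015" 1.351 lt 0.015
check "Log Buffer Waits = 0"         0     eq 0
```

The thresholds passed to the function are exactly the ones you would tune against your own baseline, which is why keeping a baseline to compare against matters so much.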
There are also several tools available on the market that can help you monitor your DB, set up alerts, and do proactive monitoring, such as Grid Control, Enterprise Manager, Insider (FourthElephant), Spotlight (Quest), or, if you prefer, your own scripts. The idea is to always use them in a proactive way, never a reactive one.

Let's change our mentality; let's stop being firefighters and start being real heroes!
Backup And Recovery
It's time to be proactive with Backup & Recovery. Whenever I arrive at a new client, I ask the DBA in charge the following questions:

Do you have your recovery strategy documented step by step?
Are you 100% sure that your tape backups are usable?
Do you know exactly how long a recovery of your production environment will take if necessary?

And almost 90% of the time the answers will be:

No!
I'm not sure, but I think so!
No idea, probably…!

You would be shocked to know how many times I'm called in to support a DBA trying to recover a database because the most recent tape backup is unusable!

Backup & Recovery is a very important (crucial) part of the DBA role, and as a DBA I can never stress enough what in my opinion is the most important rule for a DBA:
“The most important rule with respect to data is to never put yourself into an
unrecoverable situation, never!”
You know, because bad stuff happens….

…when you least expect it, and because of this, I always recommend that a DBA take a proactive approach to his or her database backup and recovery strategy.

The main idea is:

Randomly choose a backup tape and restore it on a test machine (it can be a virtual one).
Take this opportunity to document the whole recovery process.
Review the entire process and try to improve it!
Repeat this exercise every month and try to involve other DBAs in the process!

This easy process will allow you to:

Test your tape backups and see whether they are being written correctly.
Check and improve your recovery knowledge and strategy.
Document the whole recovery process so that any other DBA in the company can use it in case you are not available in a recovery situation.
Detect any errors in your backup & recovery strategy.
Know your recovery time. The next time your manager asks you, “Do you know how long a recovery will take?”, you will know the exact answer.
Have an opportunity to review your process and try to make it more efficient.

As you can see, this is an easy proactive exercise that will allow you and your company to be prepared when a disaster and recovery situation occurs, and you know this always happens when you least expect it….
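Part of this monthly restore test can be scripted. The sketch below only generates an RMAN command file that validates backups without restoring any datafiles; the channel name is a placeholder, and actually running the file of course requires an Oracle home and a configured tape library:

```shell
#!/bin/sh
# Sketch: write an RMAN command file for a monthly restore test.
# RESTORE ... VALIDATE reads the backup pieces and checks them for
# corruption without writing any datafiles, so it is a safe first
# step before a full restore rehearsal on the test machine.
cat > restore_test.rman <<'EOF'
RUN {
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;   # tape channel (placeholder)
  RESTORE DATABASE VALIDATE;
  RESTORE ARCHIVELOG ALL VALIDATE;
  RELEASE CHANNEL t1;
}
EOF
echo "Generated restore_test.rman:"
cat restore_test.rman
# To execute on the test machine: rman target / cmdfile=restore_test.rman
```

Validation alone is not a substitute for the full exercise above: you still need to restore end to end at least monthly to time the process and document every step.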
Making Magic with Datapump
A lot of people don't know about the several powerful functionalities available when using Data Pump (expdp/impdp). Most people only use these tools to export and import data (in other words, only to move data), and never notice that they can also be used, for example, to:

Mask data
Build a metadata repository
Create a version control
Clone users (create a new user using an existing user as a template)
Create smaller copies of production
Create your database in a different file structure
Move all objects from one tablespace to another
Move an object to a different schema (a simple example: change a table's owner)

Now let's see how each functionality mentioned above can be used in real life.
1) Data Masking
In many organizations (I hope), DBAs are obliged, for security and compliance purposes, to mask all sensitive information that leaves the production environment, for example to refresh or create a QA/Test or Dev environment. To help address this kind of requirement we could easily use the Enterprise Manager Data Masking Pack (remember, it is an extra pack, and consequently you need to pay extra to use it) or, as a different option, use the “remap_data” parameter available in Data Pump (this is new functionality in 11g)!

Let's use the classic SSN (Social Security Number) example to illustrate how it works:
a) First let’s create the table for the test and load some data on it.
SQL>
2
3
4
5
6*
SQL>
CREATE TABLE HR.EMPLOYEE
( EMP_ID
NUMBER(10) NOT NULL,
EMP_NAME VARCHAR2(30),
EMP_SSN VARCHAR2(9),
EMP_DOB DATE
)
/
insert into hr.employee values (101,‘Francisco Munoz’,123456789,’30-DEC-73′);
insert into hr.employee values (102,‘Horacio Miranda’,234567890,’17-JUL-76′);
insert into hr.employee values (103,‘Evelyn Aghemio’,659812831,’02-OCT-79′);
b) The second step will be to create the remap function:

SQL> create or replace package pkg_masking
  2  as
  3    function mask_ssn (p_in varchar2) return varchar2;
  4  end;
  5  /

SQL> create or replace package body pkg_masking
  2  as
  3    function mask_ssn (p_in varchar2)
  4    return varchar2
  5    is
  6    begin
  7      return lpad (
  8        round(dbms_random.value (001000000,999999999)),9,0);
  9    end;
 10  end;
 11  /
This function takes a varchar argument and returns a 9-character string. We will use this function to mask all SSN information inside our employee table.
SQL> desc employee
 Name                        Null?     Type
 --------------------------- --------- --------------
 EMP_ID                      NOT NULL  NUMBER(10)
 EMP_NAME                              VARCHAR2(30)
 EMP_SSN                               VARCHAR2(9)
 EMP_DOB                               DATE

SQL> select * from employee;

    EMP_ID EMP_NAME                       EMP_SSN   EMP_DOB
---------- ------------------------------ --------- ---------
       101 Francisco Munoz                123456789 30-DEC-73
       102 Horacio Miranda                234567890 17-JUL-76
       103 Evelyn Aghemio                 345678901 02-OCT-79
For this example, all you want to mask is the column EMP_SSN, which contains the
SSN of each employee.
c) Now we are going to export the table employee using the expdp tool and, while
exporting, use the "remap_data" parameter to mask the data in the dump file with
the function we previously created.
$expdp hr/hr tables=hr.employee dumpfile=mask_ssn.dmp directory=datapump
remap_data=hr.employee.emp_ssn:pkg_masking.mask_ssn
Note: By default, the "remap_data" parameter assumes that the user performing
the export is the owner of the remap function; if the schema owner of the
function is different, you will need to use the following command:
$ expdp hr/hr tables=hr.employee dumpfile=mask_ssn.dmp directory=datapump
remap_data=hr.employee.emp_ssn:owner.pkg_masking.mask_ssn
d) Now all we need to do is import mask_ssn.dmp into our QA/Test or Dev
database, and it will magically have the new values there.
SQL> select * from employee;

    EMP_ID EMP_NAME                       EMP_SSN   EMP_DOB
---------- ------------------------------ --------- ---------
       101 Francisco Munoz                108035616 30-DEC-73
       102 Horacio Miranda                324184688 17-JUL-76
       103 Evelyn Aghemio                 638127075 02-OCT-79
Note: You can also use the "remap_data" option with the impdp tool if you
already have a normal export. Remember that you can use it to mask almost
anything, but please take into consideration your application and
data-integrity requirements when using it!
For more information about this option and further examples, please refer to
this paper:
http://www.oracle.com/technology/products/database/utilities/pdf/datapump11g2009_transform.pdf
2) Metadata Repository and Version Control
As a DBA, I'm always looking for proactive ways to be prepared in case disaster
strikes or an emergency release rollback is required (I always love to use the
"what if" methodology), and for these reasons, having a metadata repository
with version control over it is always useful.
But how can I easily create one? Easy: first, do a full metadata backup of your
database using Data Pump.
$ expdp user/password content=metadata_only full=y directory=datapump
dumpfile=metadata_24112010.dmp
Note: If you want to create a repository only for objects like procedures,
packages, triggers, etc., all you need to do is add the parameter
include=<procedures,packages,triggers,…> to your expdp command. I usually
include the date of the dump in the dump file name, for reference purposes and
as a best practice.
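Since the dated-filename convention is easy to get wrong by hand, here is a small illustrative Python helper (purely hypothetical, not part of any Oracle tooling) that builds the export command string with the date embedded the same way as above:

```python
from datetime import date
from typing import Optional

def metadata_expdp_cmd(directory: str, day: Optional[date] = None) -> str:
    """Build a metadata-only full expdp command whose dump file name
    carries a DDMMYYYY date suffix, e.g. metadata_24112010.dmp."""
    stamp = (day or date.today()).strftime("%d%m%Y")
    return (
        "expdp user/password content=metadata_only full=y "
        f"directory={directory} dumpfile=metadata_{stamp}.dmp"
    )
```

Generating the command from the current date (rather than typing it) keeps the naming consistent, which matters when you later sort or prune the repository dumps.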
Then use the impdp tool to create the SQL file that will allow you to create all
objects in your database. It will be something like this:
$ impdp user/password directory=datapump dumpfile=metadata_24112010.dmp
sqlfile=metadata_24112010.sql
This simple technique will allow you to create your metadata repository easily
and, as a bonus, keep a version history of your database objects. Also, once
you have created your repository (DB), if you want to refresh an object
definition (as an example, let's use the table emp from schema "scott"), all
you need to do is export the new table definition from your source database and
then import it into your target database (your repository), as shown below:
$ expdp user/password content=metadata_only tables=scott.emp directory=datapump
dumpfile=refresh_of_table_emp_24112010.dmp
$ impdp user/password table_exists_action=replace directory=datapump
dumpfile=refresh_of_table_emp_24112010.dmp
3) Cloning a User
In the past, when a DBA needed to create a new user with the same structure as
an existing one (all objects, tablespace quotas, synonyms, grants, system
privileges, etc.), it was a very painful experience. Now it can all be done
very easily using Data Pump. As an example, let's say you want to create the
user "Z" exactly like the user "A". To achieve this goal, all you need to do is
first export the schema "A" definition and then import it, telling Data Pump to
change schema "A" to the new schema named "Z" using the "remap_schema"
parameter available with impdp.
$ expdp user/password schemas=A content=metadata_only directory=datapump
dumpfile=A_24112010.dmp
$ impdp user/password remap_schema=A:Z directory=datapump dumpfile=A_24112010.dmp
And your new user Z is now created just like your existing user A. That easy!
4) Creating smaller copies of production
This is a very common task for a DBA: you are asked to create a copy of your
database (for development or test purposes), but your destination server
doesn't have enough space for a full copy! This can easily be solved with Data
Pump. For this example, let's say you only have space for 70% of your
production database. To know how to proceed, we need to decide whether the copy
will contain metadata only (no data/rows) or include the data as well. Let's
see how to do it each way:
a) Metadata Only
First, do a full export of your source database.
$ expdp user/password content=metadata_only full=y directory=datapump
dumpfile=metadata_24112010.dmp
Then let's import the metadata, telling Data Pump to reduce the size of extents
to 70%. You can do this with the "transform" parameter available in impdp; it
represents the percentage multiplier that will be used to alter extent
allocations and datafile sizes.
$ impdp user/password transform=pctspace:70 directory=datapump
dumpfile=metadata_24112010.dmp
Let's do a test to see if this is really true. First, let's export any table of
my test database (metadata only) and generate the SQL script to see its normal
size.
$expdp user/password content=metadata_only tables=user.x_integration_log_det
directory=datapump dumpfile=example_24112010.dmp
$impdp user/password content=metadata_only directory=datapump
dumpfile=example_24112010.dmp sqlfile=x_24112010.sql
CREATE TABLE "USER"."X_INTEGRATION_LOG_DET"
   ( "BATCH_NO" NUMBER(9,0),
     "SEQUENCE#" NUMBER(9,0),
     "FILENAME" VARCHAR2(200 BYTE),
     "ERROR_MESSAGE" VARCHAR2(2000 BYTE),
     "NO_OF_RECORDS" NUMBER,
     "STATUS" VARCHAR2(2000 BYTE)
   ) PCTFREE 10 PCTUSED 0 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "CUSTSERV_LOG_DET_DATA" ;
Above is the SQL code generated by Data Pump; you can see that the table would
be created with 65536 bytes for the initial extent and 1048576 bytes for the
next extent. Now let's generate it again, this time using the transform
parameter to reduce the sizes to 70% of the original.
$impdp user/password transform=pctspace:70 content=metadata_only directory=datapump
dumpfile=example_24112010.dmp sqlfile=x_24112010.sql
CREATE TABLE "USER"."X_INTEGRATION_LOG_DET"
   ( "BATCH_NO" NUMBER(9,0),
     "SEQUENCE#" NUMBER(9,0),
     "FILENAME" VARCHAR2(200 BYTE),
     "ERROR_MESSAGE" VARCHAR2(2000 BYTE),
     "NO_OF_RECORDS" NUMBER,
     "STATUS" VARCHAR2(2000 BYTE)
   ) PCTFREE 10 PCTUSED 0 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 45875 NEXT 734003 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "CUSTSERV_LOG_DET_DATA" ;
Above is the SQL code generated by Data Pump; you can see that the table would
now be created with 45875 bytes for the initial extent and 734003 bytes for the
next extent, clearly a 30% reduction from the original size. In other words, it
works!
Please refer to the Oracle documentation for more ways to use the transform
parameter; you will not regret it.
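You can sanity-check that arithmetic yourself. A quick Python sketch (my own verification, not anything impdp exposes) reproduces the numbers above if we assume the multiplier truncates the result to whole bytes:

```python
def pctspace(size_bytes: int, pct: int) -> int:
    """Apply a transform=pctspace-style percentage multiplier to an
    extent size, truncating to whole bytes (an assumption inferred
    from the generated DDL, not documented behavior)."""
    return size_bytes * pct // 100

# 70% of the original INITIAL and NEXT extents from the example:
print(pctspace(65536, 70))    # 45875, matching the generated DDL
print(pctspace(1048576, 70))  # 734003, matching the generated DDL
```

Both values match the STORAGE clause in the second SQL file, which is a nice confirmation that pctspace is a straight percentage multiplier on extent sizes.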
b) Metadata and data
First, do a full export of your source database using the export parameter
"sample". This parameter specifies a percentage of data rows to be sampled and
unloaded from the source database; in this case, let's use 70%.
$ expdp user/password sample=70 full=y directory=datapump dumpfile=expdp_70_24112010.dmp
Then, as in the example before, all you need to do is import it, telling Data
Pump to reduce the size of extents to 70%, and that's it!
$ impdp user/password transform=pctspace:70 directory=datapump
dumpfile=expdp_70_24112010.dmp
5) Creating your database in a different file structure
By Francisco Munoz Alvarez – August 2010
Page 14 of 25
15. How to Improve your Oracle career and make 2010
magic with Oracle
This is very easy to achieve: all you need to do is use the parameter
"remap_datafile" in your import command, as in the example below:
$ impdp user/password directory=datapump dumpfile=example_24112010.dmp
remap_datafile=’/u01/app/oracle/oradata/datafile_01.dbf’:’/u01/datafile_01.dbf’
6) Moving all objects from one tablespace to another
This is very easy to do. As in the previous example, all you need to do is use
the parameter "remap_tablespace" in your import command, as in the example
below:
$ impdp user/password directory=datapump dumpfile=example_24112010.dmp
remap_tablespace=OLD_TBS:NEW_TBS
7) Moving an object to a different schema
All you need to do is use the parameter remap_schema when importing, as in the
example below:
$ expdp user/password tables=user.table_name directory=datapump
dumpfile=table_name_24112010.dmp
$ impdp user/password directory=datapump dumpfile=table_name_24112010.dmp
remap_schema=old_schema:new_schema
Always remember: Data Pump is your good friend, and you will be amazed by all
you can do with it to make your life as a DBA easier.
How to Make a cool magic trick
Who has never had a problem with a SQL statement that is killing database
performance but can't be fixed because it runs from a closed external
application that neither you nor your developers can touch?
Since Oracle version 10 this is a problem of the past: you can easily solve
this kind of problem using the DBMS_ADVANCED_REWRITE package, which allows you
to transform/customize queries on the fly, for example replacing a query with a
bad explain plan with another one with a good explain plan.
Before you become too excited, please remember the following restrictions:
- It does not work with bind variables (an alternative solution is in Metalink
  Doc ID 392214.1).
- It only works for SELECT statements.
- It does not work when the base table is modified through DML.
To see how it works, first we need to grant execute privileges on the package
to our user called test and allow it to create materialized views.
CONN sys/password AS SYSDBA
GRANT EXECUTE ON DBMS_ADVANCED_REWRITE TO test;
GRANT CREATE MATERIALIZED VIEW TO test;
Now let's create and populate our objects for test purposes:
CONN test/test
CREATE TABLE students (
  student_id              NUMBER(10),
  student_name            VARCHAR2(45),
  student_status          VARCHAR2(1),
  student_year            NUMBER(2),
  student_address         VARCHAR2(45),
  student_city            VARCHAR2(16),
  student_zip             NUMBER(6),
  student_social_security NUMBER(10))
/
ALTER TABLE students
ADD CONSTRAINT pk_student PRIMARY KEY (student_id)
USING INDEX
PCTFREE 0;
BEGIN
INSERT INTO STUDENTS VALUES (1,'PAUL COOK','Y',9,'5 Main Road','Auckland',2031,100101000);
INSERT INTO STUDENTS VALUES (2,'HORACIO MIRANDA','Y',8,'15 Main Road','Auckland',2031,100101001);
INSERT INTO STUDENTS VALUES (3,'SCOTT PEDERSEN','Y',7,'13 Main Road','Auckland',2031,100101002);
INSERT INTO STUDENTS VALUES (5,'SETH PICKERING','Y',9,'12 Main Road','Auckland',2031,100101003);
INSERT INTO STUDENTS VALUES (6,'FRANCISCO ALVAREZ','Y',11,'11 Main Road','Auckland',2031,100101004);
INSERT INTO STUDENTS VALUES (7,'ALTMAAR VISSER','Y',9,'16 Main Road','Auckland',2031,100101005);
INSERT INTO STUDENTS VALUES (9,'REYNALDO OCFEMIA','Y',6,'25 Main Road','Auckland',2031,100101006);
INSERT INTO STUDENTS VALUES (15,'CAMERON PITCHES','Y',12,'31 Main Road','Auckland',2031,100101007);
INSERT INTO STUDENTS VALUES (18,'MONIQUE GENNIP','Y',8,'71 Main Road','Auckland',2031,100101008);
INSERT INTO STUDENTS VALUES (99,'TERRENCE LO','Y',6,'17 Main Road','Auckland',2031,100101009);
INSERT INTO STUDENTS VALUES (100,'KIM FONG','Y',11,'16 Main Road','Auckland',2031,100101010);
INSERT INTO STUDENTS VALUES (103,'CHRIS OPPERMAN','Y',12,'7 Main Road','Auckland',2031,100101011);
INSERT INTO STUDENTS VALUES (104,'SCOTT TIGER','Y',6,'62 Main Road','Auckland',2031,100101012);
INSERT INTO STUDENTS VALUES (105,'EVELYN AGHEMIO','Y',11,'32 Main Road','Auckland',2031,100101013);
INSERT INTO STUDENTS VALUES (106,'TOMAS MUNOZ','Y',11,'18 Main Road','Auckland',2031,100101014);
INSERT INTO STUDENTS VALUES (107,'GONZALO TORRES','Y',10,'14 Principal Road','Auckland',2031,100101015);
INSERT INTO STUDENTS VALUES (108,'JOHN KEY','Y',10,'12 Principal Road','Auckland',2031,100101016);
INSERT INTO STUDENTS VALUES (109,'JOHN A','Y',7,'21 Principal Road','Auckland',2031,100101017);
INSERT INTO STUDENTS VALUES (111,'JOHN B','Y',9,'121 Principal Road','Auckland',2031,100101018);
INSERT INTO STUDENTS VALUES (112,'JOHN C','Y',8,'321 Principal Road','Auckland',2031,100101019);
INSERT INTO STUDENTS VALUES (113,'JOHN D','Y',6,'35 Principal Road','Auckland',2031,100101020);
INSERT INTO STUDENTS VALUES (114,'JOHN E','Y',12,'41 Principal Road','Auckland',2031,100101021);
INSERT INTO STUDENTS VALUES (116,'JOHN F','Y',8,'161 Principal Road','Auckland',2031,100101022);
INSERT INTO STUDENTS VALUES (10,'JOHN G','Y',7,'171 Principal Road','Auckland',2031,100101023);
INSERT INTO STUDENTS VALUES (311,'JOHN H','Y',11,'353 Principal Road','Auckland',2031,100101024);
INSERT INTO STUDENTS VALUES (312,'JOHN I','Y',7,'351 Principal Road','Auckland',2031,100101025);
INSERT INTO STUDENTS VALUES (319,'JOHN K','Y',9,'352 Principal Road','Auckland',2031,100101026);
INSERT INTO STUDENTS VALUES (322,'JOHN L','Y',6,'353 Principal Road','Auckland',2031,100101027);
INSERT INTO STUDENTS VALUES (333,'JOHN M','Y',11,'354 Principal Road','Auckland',2031,100101028);
INSERT INTO STUDENTS VALUES (343,'JOHN N','Y',6,'355 Principal Road','Auckland',2031,100101029);
INSERT INTO STUDENTS VALUES (344,'JOHN O','Y',7,'356 Principal Road','Auckland',2031,100101030);
INSERT INTO STUDENTS VALUES (345,'JOHN P','Y',8,'357 Principal Road','Auckland',2031,100101031);
INSERT INTO STUDENTS VALUES (346,'JOHN Q','Y',9,'358 Principal Road','Auckland',2031,100101032);
INSERT INTO STUDENTS VALUES (347,'JOHN R','Y',10,'359 Principal Road','Auckland',2031,100101033);
INSERT INTO STUDENTS VALUES (350,'JOHN S','Y',11,'360 Principal Road','Auckland',2031,100101034);
INSERT INTO STUDENTS VALUES (530,'JOHN T','Y',12,'361 Principal Road','Auckland',2031,100101035);
INSERT INTO STUDENTS VALUES (531,'JOHN U','Y',13,'362 Principal Road','Auckland',2031,100101036);
INSERT INTO STUDENTS VALUES (533,'JOHN V','N',6,'35 Principal Road','Auckland',2031,100101037);
INSERT INTO STUDENTS VALUES (534,'JOHN X','N',8,'13 Principal Road','Auckland',2031,100101038);
INSERT INTO STUDENTS VALUES (535,'JOHN Z','N',7,'135 Principal Road','Auckland',2031,100101039);
INSERT INTO STUDENTS VALUES (536,'JOHN Y','N',11,'435 Principal Road','Auckland',2031,100101040);
INSERT INTO STUDENTS VALUES (537,'JOHN W','Y',8,'635 Principal Road','Auckland',2031,100101041);
INSERT INTO STUDENTS VALUES (539,'ARTUR JOHNES','Y',6,'22 Secondary Road','Auckland',2031,100101042);
INSERT INTO STUDENTS VALUES (540,'KING PANTHER','Y',7,'22 Secondary Road','Auckland',2031,100101043);
INSERT INTO STUDENTS VALUES (541,'PINK PANTHER','Y',8,'22 Secondary Road','Auckland',2031,100101044);
INSERT INTO STUDENTS VALUES (542,'HAROLD ROBINS','Y',9,'221 Secondary Road','Auckland',2031,100101045);
INSERT INTO STUDENTS VALUES (543,'CHRIS BONES','Y',8,'222 Secondary Road','Auckland',2031,100101046);
INSERT INTO STUDENTS VALUES (545,'TIM TOM','Y',9,'223 Secondary Road','Auckland',2031,100101047);
INSERT INTO STUDENTS VALUES (546,'TIM JONES','Y',10,'223 Secondary Road','Auckland',2031,100101048);
INSERT INTO STUDENTS VALUES (547,'MICHAEL JONES','Y',11,'224 Secondary Road','Auckland',2031,100101049);
INSERT INTO STUDENTS VALUES (548,'ANN SMITH','Y',12,'225 Secondary Road','Auckland',2031,100101050);
INSERT INTO STUDENTS VALUES (549,'JOHN SMITH','Y',13,'226 Secondary Road','Auckland',2031,100101051);
INSERT INTO STUDENTS VALUES (551,'PAUL STONE','Y',6,'227 Secondary Road','Auckland',2031,100101052);
INSERT INTO STUDENTS VALUES (552,'CARL SMITH','Y',7,'228 Secondary Road','Auckland',2031,100101053);
INSERT INTO STUDENTS VALUES (553,'TEST','Y',8,'229 Secondary Road','Auckland',2031,100101054);
COMMIT;
END;
/
CREATE TABLE grades (
  student_id    NUMBER(10),
  grade         NUMBER(6,2),
  grade_subject VARCHAR2(4),
  grade_date    DATE,
  grade_note    VARCHAR2(60))
/
ALTER TABLE grades
ADD CONSTRAINT pk_GRADES PRIMARY KEY (student_id, grade, grade_subject,grade_date)
USING INDEX
PCTFREE 0;
ALTER TABLE grades
ADD CONSTRAINT fk_students
FOREIGN KEY (student_id)
REFERENCES students(student_id);
BEGIN
INSERT INTO GRADES VALUES (553,100.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (552,95.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (551,87.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (549,87.5,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (548,90.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (547,64.7,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (546,85.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (545,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (543,98.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (542,95.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (541,94.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (540,94.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (539,95.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (1,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (2,98.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (3,98.7,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (5,96.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (6,97.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (7,90.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (9,91.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (15,92.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (18,93.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (99,94.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (100,95.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (533,98.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (534,100.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (535,100.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (536,99.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (537,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (530,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (531,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (103,67.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (104,56.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (105,93.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (106,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (107,72.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (108,71.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (109,68.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (111,77.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (112,87.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (113,65.5,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (114,34.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (116,91.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (10,98.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (311,78.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (312,88.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (319,67.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (322,89.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (333,95.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (343,91.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (344,98.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (345,87.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (346,93.0,'ENGL',sysdate,null);
INSERT INTO GRADES VALUES (347,99.0,'ENGL',sysdate,null);
COMMIT;
INSERT INTO GRADES VALUES (553,100.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (552,95.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (551,87.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (549,87.5,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (548,90.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (547,64.7,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (546,85.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (545,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (543,98.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (542,95.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (541,94.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (540,94.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (539,95.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (1,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (2,98.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (3,98.7,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (5,96.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (6,97.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (7,90.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (9,91.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (15,92.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (18,93.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (99,94.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (100,95.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (533,98.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (534,100.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (535,100.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (536,99.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (537,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (530,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (531,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (103,67.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (104,56.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (105,93.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (106,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (107,72.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (108,71.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (109,68.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (111,77.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (112,87.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (113,65.5,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (114,34.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (116,91.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (10,98.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (311,78.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (312,88.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (319,67.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (322,89.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (333,95.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (343,91.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (344,98.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (345,87.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (346,93.0,'MATH',sysdate-7,null);
INSERT INTO GRADES VALUES (347,99.0,'MATH',sysdate-7,null);
COMMIT;
INSERT INTO GRADES VALUES (553,100.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (552,95.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (551,87.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (549,87.5,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (548,90.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (547,64.7,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (546,85.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (545,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (543,98.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (542,95.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (541,94.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (540,94.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (539,95.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (1,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (2,98.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (3,98.7,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (5,96.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (6,97.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (7,90.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (9,91.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (15,92.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (18,93.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (99,94.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (100,95.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (533,98.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (534,100.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (535,100.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (536,99.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (537,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (530,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (531,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (103,67.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (104,56.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (105,93.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (106,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (107,72.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (108,71.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (109,68.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (111,77.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (112,87.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (113,65.5,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (114,34.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (116,91.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (10,98.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (311,78.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (312,88.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (319,67.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (322,89.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (333,95.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (343,91.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (344,98.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (345,87.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (346,93.0,'BIOL',sysdate,null);
INSERT INTO GRADES VALUES (347,99.0,'BIOL',sysdate,null);
COMMIT;
INSERT INTO GRADES VALUES (553,100.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (552,95.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (551,87.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (549,87.5,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (548,90.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (547,64.7,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (546,85.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (545,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (543,98.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (542,95.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (541,94.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (540,94.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (539,95.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (1,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (2,98.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (3,98.7,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (5,96.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (6,97.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (7,90.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (9,91.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (15,92.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (18,93.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (99,94.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (100,95.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (533,98.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (534,100.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (535,100.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (536,99.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (537,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (530,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (531,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (103,67.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (104,56.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (105,93.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (106,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (107,72.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (108,71.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (109,68.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (111,77.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (112,87.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (113,65.5,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (114,34.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (116,91.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (10,98.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (311,78.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (312,88.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (319,67.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (322,89.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (333,95.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (343,91.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (344,98.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (345,87.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (346,93.0,'ARTS',sysdate,null);
INSERT INTO GRADES VALUES (347,99.0,'ARTS',sysdate,null);
COMMIT;
END;
/
Now let's simulate that you found the SQL below running in your database:
SQL> Explain plan for
select student_name,avg(a.grade) from grades a, students b
where b.student_social_security = 100101016
and b.student_id = a.student_id
group by student_name
/
STUDENT_NAME                                  AVG(A.GRADE)
--------------------------------------------- ------------
JOHN KEY                                                71
SQL> SELECT * FROM TABLE(dbms_xplan.display);

Plan hash value: 3187331965

----------------------------------------------------------------------------------
| Id  | Operation            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |     4 |   304 |     5  (20)| 00:00:01 |
|   1 |  HASH GROUP BY       |           |     4 |   304 |     5  (20)| 00:00:01 |
|   2 |   NESTED LOOPS       |           |     4 |   304 |     4   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL | STUDENTS  |     1 |    50 |     3   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN  | PK_GRADES |     4 |   104 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("B"."STUDENT_SOCIAL_SECURITY"=100101016)
   4 - access("B"."STUDENT_ID"="A"."STUDENT_ID")

Note
-----
   - dynamic sampling used for this statement
You can see that the SQL above is doing a full scan of the students table.
After a few modifications we have another SQL, a little more efficient:
SQL> Explain plan for
select student_name,avg(a.grade) from grades a, students b
where b.student_id = 108
and b.student_id = a.student_id
group by student_name
/
STUDENT_NAME                                  AVG(A.GRADE)
--------------------------------------------- ------------
JOHN KEY                                                71
Execution Plan
----------------------------------------------------------
Plan hash value: 3300694555

-----------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |     4 |   252 |     2   (0)| 00:00:01 |
|   1 |  HASH GROUP BY                |            |     4 |   252 |     2   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |            |     4 |   252 |     2   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| STUDENTS   |     1 |    37 |     1   (0)| 00:00:01 |
|*  4 |     INDEX UNIQUE SCAN         | PK_STUDENT |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | PK_GRADES  |     4 |   104 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("B"."STUDENT_ID"=108)
   5 - access("A"."STUDENT_ID"=108)

Note
-----
   - dynamic sampling used for this statement
You can see that the second SQL statement is more efficient than the first one, so we will tell Oracle to replace the bad SQL with the good one every time the bad SQL is executed. How can we do it? Easily; the magic will be:
SQL> ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;

SQL> BEGIN
       sys.dbms_advanced_rewrite.declare_rewrite_equivalence (
         name             => 'test_rw1',
         source_stmt      =>
           'select student_name, avg(a.grade)
            from grades a, students b
            where b.student_social_security = 100101016
            and b.student_id = a.student_id
            group by student_name',
         destination_stmt =>
           'select student_name, avg(a.grade)
            from grades a, students b
            where b.student_id = 108
            and b.student_id = a.student_id
            group by student_name',
         validate         => false,
         rewrite_mode     => 'text_match');
     END;
     /
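A note on prerequisites, in case the block above fails in your environment: DBMS_ADVANCED_REWRITE is owned by SYS and is not executable by ordinary users by default, and the rewrite only fires in sessions where query rewrite is enabled (it defaults to enabled in recent releases, but may have been turned off). A sketch of the setup, where the SCOTT schema name is only an example:

```sql
-- As SYS: allow the target schema to declare rewrite equivalences.
GRANT EXECUTE ON DBMS_ADVANCED_REWRITE TO scott;

-- In the session that should pick up the rewrite:
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
```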
Let’s now see if the magic really works:
SQL> Explain plan for
select student_name,avg(a.grade) from grades a, students b
where b.student_social_security = 100101016
and
b.student_id = a.student_id
group by student_name
/
STUDENT_NAME                                  AVG(A.GRADE)
--------------------------------------------- ------------
JOHN KEY                                                71
SQL> SELECT * FROM TABLE(dbms_xplan.display);

Execution Plan
----------------------------------------------------------
Plan hash value: 3300694555

---------------------------------------------------------------------------------------------
| Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |     4 |   252 |     2   (0)| 00:00:01 |
|   1 |  HASH GROUP BY                 |            |     4 |   252 |     2   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                 |            |     4 |   252 |     2   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID | STUDENTS   |     1 |    37 |     1   (0)| 00:00:01 |
|*  4 |     INDEX UNIQUE SCAN          | PK_STUDENT |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN            | PK_GRADES  |     4 |   104 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("B"."STUDENT_ID"=108)
   5 - access("A"."STUDENT_ID"=108)

Note
-----
   - dynamic sampling used for this statement
The magic is done: now every time the source_stmt is executed, it will be replaced by the destination_stmt, which has a better execution plan.
Enjoy this amazing trick!
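One last housekeeping detail: the equivalence stays in place until you remove it, so once the bad SQL is eventually fixed at its source you will want to clean it up. A hedged sketch, using the same DBMS_ADVANCED_REWRITE package and the USER_REWRITE_EQUIVALENCES dictionary view:

```sql
-- List the equivalences declared in the current schema.
SELECT name, rewrite_mode FROM user_rewrite_equivalences;

-- Remove the 'test_rw1' equivalence created above.
BEGIN
  sys.dbms_advanced_rewrite.drop_rewrite_equivalence(name => 'test_rw1');
END;
/
```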
The 3 DBAs
To finish this paper, I would like to explain what are, in my opinion, the three kinds of DBAs currently available in the IT market:
The Firefighter DBA
The Proactive DBA
The Balanced DBA
Now let’s see what each type of DBA means to me.
a) The Firefighter
This is the DBA who is constantly fighting one fire or another. This kind of
DBA has great knowledge of how to solve problems, but because he or she is
always busy solving problems, there is never enough time to do anything else.
This DBA forgets what it is to be proactive, and in most cases the problems
that keep him or her busy are self-inflicted: the database is not cared for as
required, and things are fixed only when they go wrong, following the
mentality "if it is not broken, don't touch it". This kind of DBA loves the
adrenaline and loves to be called the hero at the end of the day, but hates to
be on call because of all the calls received!
b) The Proactive
This kind of DBA is always looking to solve possible problems before they
become real problems, is always running checklists, and sets his or her
monitoring system to use proactive thresholds, thereby avoiding unnecessary
calls during nights and weekends. Their databases are stable most or all of
the time, and because of this, many of these DBAs forget what to do in a
serious emergency. Another common complaint from this kind of DBA is that
they receive no recognition from their bosses; some have even been fired or
replaced because their company thought it did not need a DBA, since nothing
bad ever happened to its databases.
c) The Balanced
You can easily see that the two previous stereotypes each have a good side
and a bad side. The final type of DBA, the Balanced one, takes the good side
of each previous stereotype, finally creating the ideal DBA.
How can this be possible?
Easily: the Balanced DBA behaves like the Proactive DBA, but uses his or her
free time to keep learning and practicing firefighter skills, just like my
earlier example regarding proactive backups in this paper. This kind of DBA
practices backup and recovery in a test environment from any backup on tape.
This exercise allows the DBA to test and improve his or her recovery skills,
verify that the tapes are OK, and know exactly what to do in case a recovery
becomes necessary.
Also, this kind of DBA is constantly working through tutorials, such as the
Oracle by Example (OBE) series on OTN, to improve his or her skills and learn
new techniques. And to avoid any possible misinterpretation of their work,
this kind of DBA constantly involves his or her boss in what is happening,
always generating a monthly report showing:
Current status of databases
Possible problems detected during the month
Problems avoided / risks managed
Summary of daily checklists
Status of backups
Results of simulation tests (recovery, OBE, etc.)
With this simple report, your company will see that you really care about
their data, and will understand what you are doing for them!
After learning about these three types of DBAs in the market, can you tell
which type you are? And what can you do to improve your DBA career?