This document provides guidance on tuning and monitoring SQL queries in Tibero. It describes how to view SQL execution results using SQL trace files generated by setting the SQL_TRACE parameter or using the SET_SQL_TRACE_IN_SESSION procedure. It also explains how to use the tbPROF command to analyze the output trace files and view statistics on query parsing, execution and fetching. Additional topics covered include using AUTOTRACE in tbSQL, the V$SQL_PLAN view, SQL_TRACE_DEST parameter and privilege requirements for AUTOTRACE.
Tuning and Monitoring
Better Technology, Better Tomorrow
Table of Contents
1. Viewing SQL Query Execution Results using SQL Trace and tbPROF
   1.1. Using the SQL_TRACE Parameter
        1.1.1. Setting the SQL_TRACE Parameter
        1.1.2. Verifying the Created SQL Trace File
        1.1.3. Executing the tbPROF Command
        1.1.4. Analyzing the Output File after Executing tbPROF
   1.2. Using the SET_SQL_TRACE_IN_SESSION Procedure
2. Using AUTOTRACE in tbSQL
3. Using V$SQL_PLAN
4. Others
   4.1. The SQL_TRACE_DEST Parameter
   4.2. Privilege Issues when Using AUTOTRACE in tbSQL
Tibero SQL Execution Plan Guide
1. Viewing SQL Query Execution Results using SQL Trace and tbPROF
To collect SQL execution information, SQL trace files can be created statically or dynamically by setting the SQL_TRACE parameter.
- SQL statement executions are traced.
- A statement that needs tuning can be easily identified in the execution plan.
- Because the contents of an SQL trace are written to a file, query processing can slow down while tracing is enabled.
1.1. Using the SQL_TRACE Parameter
1.1.1. Setting the SQL_TRACE Parameter
SQL_TRACE determines whether to record SQL trace information. An SQL trace enables profiling of SQL execution
information. However, this feature can severely impact performance, so it is recommended not to use it in a
production environment.

Property              Description
--------------------  ---------------------------------------------------------------
Type                  Boolean
Default Value         N
Class                 Optional, Adjustable, Dynamic, Session
Configuration Method  Configure the TIP file and restart the server, or change the
                      value using an ALTER statement.
Syntax                • TIP File
                        - SQL_TRACE={Y|N}
                      • ALTER Statement
                        - ALTER SYSTEM SET SQL_TRACE={Y|N}
                        - ALTER SESSION SET SQL_TRACE={Y|N}
Examples
-- Query the SQL_TRACE information in V$SESSION.
SQL> SELECT SID, SERIAL#, USERNAME, SQL_TRACE, TO_CHAR(LOGON_TIME, 'DD HH24:MI:SS')
LOGON_TIME
FROM V$SESSION
ORDER BY LOGON_TIME;
SID SERIAL# USERNAME SQL_TRACE LOGON_TIME
---------- ---------- --------- --------- -----------
9 4 SYS ENABLED 18 15:43:29
20 83 SYS ENABLED 18 15:44:47
19 567 SYS DISABLED 18 15:52:50
3 rows selected.
-- Use a text editor to modify the $TB_HOME/config/$TB_SID.tip file.
-- Set SQL_TRACE=Y and restart Tibero via tbboot.
#-------------------------------------------------------------------------------
#
# RDBMS initialization parameter
#
#-------------------------------------------------------------------------------
DB_NAME=tibero
LISTENER_PORT=8629
CONTROL_FILES="C:/Tibero/tbdata/c1.ctl"
CERTIFICATE_FILE="C:/Tibero/tibero5/config/svr_wallet/tibero.crt"
#PRIVKEY_FILE="C:/Tibero/tibero5/config/svr_wallet/tibero.key"
#WALLET_FILE="C:/Tibero/tibero5/config/svr_wallet/WALLET"
DB_CREATE_FILE_DEST=C:/Tibero/tbdata
LOG_ARCHIVE_DEST=C:/Tibero/arch
MAX_SESSION_COUNT=10
TOTAL_SHM_SIZE=512M
UNDO_RETENTION=900
_TSN_TIME_MAP_SIZE=1000
SQL_TRACE=Y
---------------------------------------------------------------------------------
-- Access tbSQL.
$ tbsql sys/tibero
tbSQL 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
Connected to Tibero.
-- Verify that SQL_TRACE has been set to Y using the V$SESSION view.
SQL> SELECT SID, SERIAL#, USERNAME, SQL_TRACE, TO_CHAR(LOGON_TIME, 'DD HH24:MI:SS')
LOGON_TIME
FROM V$SESSION
ORDER BY LOGON_TIME;
SID SERIAL# USERNAME SQL_TRACE LOGON_TIME
---------- ---------- --------- --------- -----------
9 4 SYS ENABLED 18 15:43:29
20 83 SYS ENABLED 18 15:44:47
19 567 SYS ENABLED 18 15:52:50
3 rows selected.
-- Set SQL_TRACE=Y and execute an SQL query.
SQL> SELECT e.employee_id, e.first_name||' '||e.last_name emp_name, d.department_id,
d.department_name
FROM departments d
LEFT OUTER JOIN employees e
ON(e.department_id=d.department_id);
1.1.2. Verifying the Created SQL Trace File
The trace file is created in the following directory with the '.trc' extension.
File Location: $TB_HOME/instance/$TB_SID/log/sqltrace
File Naming Rule
- The file name is composed of the PID, SID, and serial# of the session.
  E.g., tb_sqltrc_PID_SID_serial#.trc
- The PID, SID, and serial# can be queried from the V$SESSION view.
-- Query the PID, SID, and serial# information using the V$SESSION view.
SQL> SELECT PID, SID, SERIAL#, USERNAME, SQL_TRACE, TO_CHAR(LOGON_TIME, 'DD
HH24:MI:SS') LOGON_TIME
FROM V$SESSION
ORDER BY LOGON_TIME;
PID SID SERIAL# USERNAME SQL_TRACE LOGON_TIME
---------- ---------- --------- --------- ---------- ------------
3179 9 4 SYS ENABLED 18 15:43:29
3180 20 83 SYS ENABLED 18 15:44:47
3181 19 567 SYS ENABLED 18 15:52:50
3 rows selected.
-- Check the tb_sqltrc_3181_19_567.trc file created in the
$TB_HOME/instance/$TB_SID/log/sqltrace directory.
$ cd tibero5/instance/tibero/log/sqltrace
$ ls
tb_sqltrc_3180_1_3.trc tb_sqltrc_3181_20_3524.trc
tb_sqltrc_3180_9_4.trc tb_sqltrc_3181_20_83.trc
tb_sqltrc_3181_19_2766.trc tb_sqltrc_3181_21_7024.out
tb_sqltrc_3181_19_442.trc tb_sqltrc_3181_21_7024.trc
tb_sqltrc_3181_19_50.trc tb_sqltrc_3181_21_7829.trc
tb_sqltrc_3181_19_567.trc tb_sqltrc_3183_4_0.trc
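The PID, SID, and serial# encoded in a trace file name can also be recovered programmatically when working with many files. A minimal sketch (the helper below is not part of Tibero; it only assumes the naming rule described above):

```python
import re

# Hypothetical helper (not part of Tibero): split a trace file name of the
# form tb_sqltrc_<PID>_<SID>_<serial#>.trc into its components.
TRACE_NAME = re.compile(r"tb_sqltrc_(\d+)_(\d+)_(\d+)\.trc$")

def parse_trace_name(filename):
    """Return {'pid', 'sid', 'serial#'} for a .trc file name, else None."""
    m = TRACE_NAME.search(filename)
    if m is None:
        return None
    pid, sid, serial = (int(g) for g in m.groups())
    return {"pid": pid, "sid": sid, "serial#": serial}

print(parse_trace_name("tb_sqltrc_3181_19_567.trc"))
# → {'pid': 3181, 'sid': 19, 'serial#': 567}
```

This makes it easy to match files in the sqltrace directory against the PID/SID/serial# columns queried from V$SESSION.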
1.1.3. Executing the tbPROF Command
This command reports information about SQL statements in the parse, execute, and fetch stages, and sorts the
results by the user-specified fields.
The following is an example of checking the tbPROF parameters and executing the tbPROF command.
Checking the tbPROF parameter
$ tbprof
Usage: tbprof tracefile outputfile [print= ] [sort= ] [aggregate= ]
print=integer List only the first 'integer' SQL statements.
sys=yes|no Filter SQL statements that 'SYS' user executes.
aggregate=yes|no Aggregate statistics of same SQL statements.
sort=option Set of zero or more of the following sort options:
prscnt number of times parse was called
prscpu cpu time parsing
prsela elapsed time parsing
prsdsk number of disk reads during parse
prsqry number of buffers for consistent read during parse
prscu number of buffers for current read during parse
execnt number of times execute was called
execpu cpu time spent executing
exeela elapsed time executing
exedsk number of disk reads during execute
exeqry number of buffers for consistent read during execute
execu number of buffers for current read during execute
exerow number of rows processed during execute
fchcnt number of times fetch was called
fchcpu cpu time spent fetching
fchela elapsed time fetching
fchdsk number of disk reads during fetch
fchqry number of buffers for consistent read during fetch
fchcu number of buffers for current read during fetch
fchrow number of rows fetched
userid userid of user that parsed the cursor
Item        Description                                                        Default
----------  -----------------------------------------------------------------  -------
tracefile   Name of the file that contains statistical information created by  -
            the SQL trace.
outputfile  File into which tbPROF formats the contents of the trace file as   -
            readable text.
print       Prints the trace results of only the specified number of SQL       ALL
            statements.
sys         Determines whether to list SQL statements issued by the SYS user.  yes
aggregate   Determines whether to aggregate information for identical SQL      no
            statements.
sort        Determines how results are sorted.                                 -
            - Results are sorted in descending order by the specified sort
              options.
            - More than one option can be specified, separated by commas
              with no whitespace.
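The sort= behavior can be illustrated with a small sketch. This is illustrative only: the statements and metric values below are made up, and only the option names (fchela, exeela) come from the tbprof help text above.

```python
# Illustrative sketch of how tbPROF's sort= option orders SQL statements:
# descending by the sum of the values of the requested sort options.
def sort_like_tbprof(statements, options):
    """Sort statement dicts in descending order of their summed option values."""
    return sorted(statements,
                  key=lambda s: sum(s.get(o, 0) for o in options),
                  reverse=True)

# Hypothetical per-statement counters (fchela = elapsed time fetching,
# exeela = elapsed time executing, as listed in the tbprof help text).
stmts = [
    {"sql": "SELECT * FROM employees",         "fchela": 0.42, "exeela": 0.10},
    {"sql": "UPDATE employees SET salary=...", "fchela": 0.00, "exeela": 1.30},
]

top = sort_like_tbprof(stmts, ["fchela", "exeela"])
print(top[0]["sql"])  # the statement with the largest combined elapsed time
```

Sorting descending on the combined metrics is what puts the most expensive statements at the top of the tbPROF report, which is usually where tuning effort should start.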
1.1.4. Analyzing the Output File after Executing tbPROF
Use tbPROF to analyze the executed SQL. The output file contains statistics and execution plans for the executed
SQL statements.
Creating an output file using tbPROF
$ tbprof tb_sqltrc_3181_19_567.trc tb_sqltrc_3181_19_567.out sys=no
TBPROF 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
$
Checking the output file
$ vi tb_sqltrc_3181_19_567.out
TBPROF 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
input file name : tb_sqltrc_3181_21_7024.trc
output file name : tb_sqltrc_3181_21_7024.out
sort option : default
aggregate : no
sys : no
print : all
=================================================================
count: number of times the procedure was executed
cpu: CPU time(seconds).
this is not quite accurate due to threaded architecture
elapsed: Total time elapsed (seconds)
disk: number of physical reads from disk
query: number of blocks for consistent read
current: number of blocks in current mode
rows: number of rows processed
=================================================================
UserID:20
SELECT e.employee_id, e.first_name||' '||e.last_name emp_name, d.department_id,
d.department_name
FROM departments d
LEFT OUTER JOIN employees e
ON(e.department_id=d.department_id)
stage count cpu elapsed current query disk rows
-----------------------------------------------------------------------------
parse 1 0.00 0.00 0 0 0 0
exec 1 0.00 0.00 0 0 0 0
fetch 1 0.00 0.00 0 10 0 20
-----------------------------------------------------------------------------
sum 3 0.00 0.00 0 10 0 20
- cpu: CPU time. (sec)
- elapsed: Total time elapsed. (sec)
- current: Number of current blocks retrieved.
- query: Number of cr blocks retrieved.
- disk: Number of physical reads from disk.
- rows: Number of rows processed
rows execution plan
----------------------------------------------------------
20 index join (left outer) (et=93, cr=0, cu=0, co=4, cpu=0, ro=20)
8 table access (rowid) DEPARTMENTS(1749) (et=426, cr=1, cu=0, co=2, cpu=0, ro=8)
8 index (full) DEPT_ID_PK(1750) (et=52, cr=1, cu=0, co=1, cpu=0, ro=8)
19 table access (rowid) EMPLOYEES(1755) (et=354, cr=7, cu=0, co=2, cpu=0, ro=2)
19 index (range scan) EMP_DEPARTMENT_IX(1764) (et=125, cr=1, cu=0, co=1, cpu=0, ro=2)
*******************************************************************************
OVERALL TOTAL
stage count cpu elapsed current query disk rows
-----------------------------------------------------------------------------
parse 1 0.00 0.00 0 0 0 0
exec 1 0.00 0.00 0 0 0 0
fetch 1 0.00 0.00 0 10 0 20
-----------------------------------------------------------------------------
sum 3 0.00 0.00 0 10 0 20
*******************************************************************************
1 SQL statements in trace file.
1 unique SQL statements in trace file.
16 lines in trace file.
tbPROF Statistics Information
Call Value Description
Parse
- Converts the SQL statement into an execution plan, including checks for tables,
columns, and reference objects.
Execute
- Executes INSERT, UPDATE, and DELETE statements, which modify the data
according to the execution plan.
- Indicates the number of selected rows during execution of the statement.
Fetch
- Indicates the number of rows returned by a query.
- Fetches are performed for SELECT statements.
Argument Description
Count - Number of times an SQL statement was parsed, executed, or fetched.
CPU - CPU time spent to parse, execute, or fetch calls. (sec)
Elapsed - Total elapsed time, from the start to the end of a process.
Current
- Number of dirty blocks accessed. A dirty block is a block that has been modified
by a session but not yet written to the database.
- Dirty blocks seldom occur during the execution of SELECT statements, but most
often occur during the execution of UPDATE, INSERT, and DELETE statements.
Query
- Number of blocks read in consistent read (CR) mode: either unchanged blocks read
as-is, or snapshot copies created so that uncommitted changes are not read.
- This sometimes occurs during the execution of UPDATE, DELETE, and INSERT
statements, but most often occurs during the execution of SELECT statements.
Disk - Number of data blocks read from the disk.
Rows
- Total number of rows accessed by the SQL statement.
- Does not include rows extracted by subqueries.
et - Time spent on the node. (usec)
cr - Number of cr blocks read from the node.
cu - Number of current blocks read from the node.
co - Cost of the nodes computed by the optimizer.
ro - Number of rows of the node estimated by the optimizer.
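One way to use these per-node figures (an interpretation of the sample plan above, not a definitive rule) is to compare the actual row count on each plan line with the optimizer's estimate (ro):

```
19  table access (rowid) EMPLOYEES(1755)       (..., ro=2)
19  index (range scan) EMP_DEPARTMENT_IX(1764) (..., ro=2)
```

The optimizer estimated 2 rows per node (ro=2), but 19 rows were actually processed. A gap of this size often points to stale statistics on the underlying table, which is worth checking before deeper tuning.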
1.2. Using the SET_SQL_TRACE_IN_SESSION Procedure
Determining whether to create an SQL TRACE file
- Starts/stops the creation of a SQL trace log for a session. The identifier and the serial number of the
session can be found using the V$SESSION view.
- A target statement for tuning can be easily found in the execution plan.
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION
Parameter Type Description
SID NUMBER Identifier of the session.
SERIAL# NUMBER Serial number of the session.
SQL_TRACE BOOLEAN Specify as TRUE to create an SQL trace log, or FALSE to stop.
SQL Syntax
- EXEC SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID, SERIAL#, SQL_TRACE);
- BEGIN
SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (SID, SERIAL#, SQL_TRACE);
END;
Examples
-- Access tbSQL.
$ tbsql sys/tibero
tbSQL 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
Connected to Tibero.
-- Check the SET_SQL_TRACE_IN_SESSION information.
SQL> DESC DBMS_SYSTEM
PROCEDURE 'SET_SQL_TRACE_IN_SESSION'
ARGUMENT_NAME TYPE IN/OUT
------------------------ ------------------ ----------------
SID NUMBER IN
SERIAL# NUMBER IN
SQL_TRACE BOOLEAN IN
-- Query the parameter values using the V$SESSION view.
SQL> SELECT SID, SERIAL#, USERNAME, SQL_TRACE, TO_CHAR(LOGON_TIME, 'DD HH24:MI:SS')
LOGON_TIME
FROM V$SESSION
ORDER BY LOGON_TIME;
SID SERIAL# USERNAME SQL_TRACE LOGON_TIME
---------- ---------- --------- --------- -----------
9 4 SYS ENABLED 18 15:43:29
20 83 SYS ENABLED 18 15:44:47
19 567 SYS DISABLED 18 15:52:50
3 rows selected.
-- Set SQL_TRACE to Y.
SQL> EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(19, 567, TRUE);
PSM completed.
-- Verify that SQL_TRACE has been set to Y.
SQL> SELECT SID, SERIAL#, USERNAME, SQL_TRACE, TO_CHAR(LOGON_TIME, 'DD HH24:MI:SS')
LOGON_TIME
FROM V$SESSION
ORDER BY LOGON_TIME;
SID SERIAL# USERNAME SQL_TRACE LOGON_TIME
---------- ---------- --------- --------- -----------
9 4 SYS ENABLED 18 15:43:29
20 83 SYS ENABLED 18 15:44:47
19 567 SYS ENABLED 18 15:52:50
3 rows selected.
-- Set SQL_TRACE to Y and execute an SQL query.
SQL> SELECT e.employee_id, e.first_name||' '||e.last_name emp_name, d.department_id,
d.department_name
FROM departments d
LEFT OUTER JOIN employees e
ON(e.department_id=d.department_id);
-- Query the PID, SID, and serial# information using the V$SESSION view.
SQL> SELECT PID, SID, SERIAL#, USERNAME, SQL_TRACE, TO_CHAR(LOGON_TIME, 'DD
HH24:MI:SS') LOGON_TIME
FROM V$SESSION
ORDER BY LOGON_TIME;
PID SID SERIAL# USERNAME SQL_TRACE LOGON_TIME
---------- ---------- --------- --------- ---------- ------------
3179 9 4 SYS ENABLED 18 15:43:29
3180 20 83 SYS ENABLED 18 15:44:47
3181 19 567 SYS ENABLED 18 15:52:50
3 rows selected.
-- Check the tb_sqltrc_3181_19_567.trc file created in the
-- $TB_HOME/instance/$TB_SID/log/sqltrace directory.
$ cd tibero5/instance/tibero/log/sqltrace
$ ls
tb_sqltrc_3180_1_3.trc tb_sqltrc_3181_20_3524.trc
tb_sqltrc_3180_9_4.trc tb_sqltrc_3181_20_83.trc
tb_sqltrc_3181_19_2766.trc tb_sqltrc_3181_21_7024.out
tb_sqltrc_3181_19_442.trc tb_sqltrc_3181_21_7024.trc
tb_sqltrc_3181_19_50.trc tb_sqltrc_3181_21_7829.trc
tb_sqltrc_3181_19_567.trc tb_sqltrc_3183_4_0.trc
-- Create an output file.
$ tbprof tb_sqltrc_3181_19_567.trc tb_sqltrc_3181_19_567.out sys=no
TBPROF 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
$
-- Check the contents of the output file. (Statistical information)
$ vi tb_sqltrc_3181_19_567.out
TBPROF 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
input file name : tb_sqltrc_3181_21_7024.trc
output file name : tb_sqltrc_3181_21_7024.out
sort option : default
aggregate : no
sys : no
print : all
=================================================================
count: number of times the procedure was executed
cpu: CPU time(seconds)
this is not quite accurate due to threaded architecture
elapsed: elapsed time(seconds)
disk: number of physical reads from disk
query: number of blocks for consistent read
current: number of blocks in current mode
rows: number of rows processed
=================================================================
UserID:20
SELECT e.employee_id, e.first_name||' '||e.last_name emp_name, d.department_id,
d.department_name
FROM departments d
LEFT OUTER JOIN employees e
ON(e.department_id=d.department_id)
stage count cpu elapsed current query disk rows
-----------------------------------------------------------------------------
parse 1 0.00 0.00 0 0 0 0
exec 1 0.00 0.00 0 0 0 0
fetch 1 0.00 0.00 0 10 0 20
-----------------------------------------------------------------------------
sum 3 0.00 0.00 0 10 0 20
- cpu: CPU time. (sec)
- elapsed: Total time elapsed. (sec)
- current: Number of current blocks retrieved.
- query: Number of cr blocks retrieved.
- disk: Number of physical reads from the disk.
- rows: Number of rows processed.
rows execution plan
----------------------------------------------------------
20 index join (left outer) (et=93, cr=0, cu=0, co=4, cpu=0, ro=20)
8 table access (rowid) DEPARTMENTS(1749) (et=426, cr=1, cu=0, co=2, cpu=0, ro=8)
8 index (full) DEPT_ID_PK(1750) (et=52, cr=1, cu=0, co=1, cpu=0, ro=8)
19 table access (rowid) EMPLOYEES(1755) (et=354, cr=7, cu=0, co=2, cpu=0, ro=2)
19 index (range scan) EMP_DEPARTMENT_IX(1764) (et=125, cr=1, cu=0, co=1, cpu=0, ro=2)
*******************************************************************************
OVERALL TOTAL
stage count cpu elapsed current query disk rows
-----------------------------------------------------------------------------
parse 1 0.00 0.00 0 0 0 0
exec 1 0.00 0.00 0 0 0 0
fetch 1 0.00 0.00 0 10 0 20
-----------------------------------------------------------------------------
sum 3 0.00 0.00 0 10 0 20
*******************************************************************************
1 SQL statements in trace file.
1 unique SQL statements in trace file.
16 lines in trace file.
2. Using Autotrace in tbSQL
SQL Syntax
- SET AUTOT[RACE] {OFF|ON|TRACE[ONLY]} [EXP[LAIN]] [STAT[ISTICS]] [PLANS[TAT]]
Setting Description
SET AUTOTRACE OFF AUTOTRACE is not performed. (Default value)
SET AUTOTRACE ON
Shows the execution plan, execution results, and execution
statistics of the SQL statement.
SET AUTOTRACE TRACEONLY
Shows the execution plan and the execution statistics of the
SQL statement, but not the query results, so it runs faster.
SET AUTOTRACE ON EXPLAIN
Shows the execution results and the execution plan of the SQL
statement.
SET AUTOTRACE ON STATISTICS
Shows the execution results and the execution statistics of the
SQL statement.
SET AUTOTRACE ON PLANSTAT
Shows the execution results of the SQL statement and the
query executions per node.
(Time spent executing, number of rows processed, number of
executions)
- When AUTOTRACE is used, OFF, ON, or TRACE[ONLY] must be specified.
- [EXP[LAIN]] [STAT[ISTICS]] [PLANS[TAT]] shows the execution plan, execution statistics, and query
executions in each node, respectively. Execution information includes execution time, number of rows
processed, and number of executions. All, some, or none of the options can be selected.
- AUTOTRACE allows users to selectively view execution results, execution plans, or execution statistics
according to the option.
- Execution results and execution statistics are the results of query executions, and the execution plan
contains estimated values.
Privilege - The DBA privilege or the PLUSTRACE role is required.
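If the role is missing, it can be granted in advance (a minimal sketch; the user name HR is hypothetical):

```sql
-- Grant the PLUSTRACE role to a hypothetical user HR so that
-- AUTOTRACE can display execution plans and statistics.
SQL> GRANT PLUSTRACE TO HR;
```

The plustrace.sql script shown in section 4.2 creates the role when it does not yet exist.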
Shows Execution Result/Execution Plan/Execution Statistics/Execution Information
Category Description
SQL Query SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
Execution Result
AVG
----------
4666.66667
1 row selected.
Execution Plan
SQL ID: 92
Plan Hash Value: 1910332783
Execution Plan
--------------------------------------------------------------
1 GROUP BY (HASH) (Cost: 26, %CPU: 0, Rows: 1)
2 TABLE ACCESS (FULL): EMPLOYEE (Cost: 26, %CPU: 0, Rows: 3)
Execution
Statistics
NAME VALUE
------------------------------ ---------------------------------------
db block gets 1
consistent gets 14
physical reads 0
redo size 0
sorts (disk) 0
sorts (memory) 0
rows processed 1
- db block gets: Number of times a current block was requested.
- consistent gets: Number of logical blocks read in Consistent mode.
- physical reads: Total number of data blocks read from disk.
- redo size: Size of the redo logs generated. (bytes)
- sorts (disk): Number of sort operations that were performed on disk.
- sorts (memory): Number of sort operations that were performed in memory.
- rows processed: Number of rows processed during the operation.
Execution
Information
SQL ID: 92
Plan Hash Value: 1910332783
Execution Stat
--------------------------------------------------
1 GROUP BY (HASH) (Time: .83 ms, Rows: 1, Starts: 1)
2 TABLE ACCESS (FULL): EMPLOYEE (Time: .15 ms, Rows: 3, Starts: 1)
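The execution statistics above can also be cross-checked with a common rule of thumb (a general buffer-cache heuristic, not a Tibero-specific formula): total logical reads are db block gets plus consistent gets, and comparing them with physical reads shows how well the buffer cache served the query.

```
logical reads  = db block gets + consistent gets = 1 + 14 = 15
cache hit rate = 1 - (physical reads / logical reads) = 1 - 0/15 = 100%
```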
Examples
-- Access TBSQL
$ tbsql sys/tibero
tbSQL 5
Copyright (c) 2008, 2009, 2011, 2012 Tibero Corporation. All rights reserved.
Connected to Tibero.
-- Enable the SET AUTOTRACE ON option.
SQL> SET AUTOTRACE ON;
-- Check the execution results, execution plan, and execution statistics of the SQL
-- query that was performed in the session.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
SQL ID: 92
Plan Hash Value: 1910332783
Execution Plan
--------------------------------------------------------------------------------
1 GROUP BY (HASH) (Cost: 26, %CPU: 0, Rows: 1)
2 TABLE ACCESS (FULL): EMPLOYEE (Cost: 26, %CPU: 0, Rows: 3)
NAME VALUE
------------------------------ ----------
db block gets 1
consistent gets 14
physical reads 0
redo size 0
sorts (disk) 0
sorts (memory) 0
rows processed 1
-- Enable the SET AUTOTRACE TRACEONLY option
SQL> SET AUTOTRACE TRACEONLY;
-- Check the execution plan and the execution statistics of the SQL query that was
-- performed in the session.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
SQL ID: 115
Plan Hash Value: 1910332783
Execution Plan
--------------------------------------------------------------------------------
1 GROUP BY (HASH) (Cost: 26, %CPU: 0, Rows: 1)
2 TABLE ACCESS (FULL): EMPLOYEE (Cost: 26, %CPU: 0, Rows: 3)
NAME VALUE
------------------------------ ----------
db block gets 1
consistent gets 14
physical reads 0
redo size 0
sorts (disk) 0
sorts (memory) 0
rows processed 1
-- Enable the SET AUTOTRACE ON EXPLAIN option
SQL> SET AUTOTRACE ON EXPLAIN;
-- Check the execution results and the execution plan of the SQL query that was
-- performed in the session.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
SQL ID: 92
Plan Hash Value: 1910332783
Execution Plan
--------------------------------------------------------------------------------
1 GROUP BY (HASH) (Cost:26, %CPU:0, Rows:1)
2 TABLE ACCESS (FULL): EMPLOYEE (Cost:26, %CPU:0, Rows:3)
-- Enable the SET AUTOTRACE ON STATISTICS option
SQL> SET AUTOTRACE ON STATISTICS;
-- Check the execution results and the execution statistics of the SQL query that was
-- performed in the session.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
NAME VALUE
------------------------------ ----------
db block gets 1
consistent gets 14
physical reads 0
redo size 0
sorts (disk) 0
sorts (memory) 0
rows processed 1
-- Enable the SET AUTOTRACE ON PLANSTAT option
SQL> SET AUTOTRACE ON PLANSTAT;
-- Check the execution results and the execution information of the SQL query
-- performed in the session.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
SQL ID: 92
Plan Hash Value: 1910332783
Execution Stat
--------------------------------------------------------------------------------
1 GROUP BY (HASH) (Time: .83 ms, Rows: 1, Starts: 1)
2 TABLE ACCESS (FULL): EMPLOYEE (Time: .15 ms, Rows: 3, Starts: 1)
-- Enable the SET AUTOTRACE OFF option
SQL> SET AUTOTRACE OFF;
-- Check the execution results of the SQL query performed in the session and the
-- result of enabling the AUTOTRACE OFF option.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
3. Using V$SQL_PLAN
Displays information about the physical plan for executing SQL statements.
Columns Information
Column Name Data Type Description
HASH_VALUE NUMBER Hash value of the physical plan
SQL_ID NUMBER SQL identifier
OPERATION VARCHAR(128) Name of an operation job
OBJECT# NUMBER Identifier of an object accessed by the job
OBJECT_OWNER VARCHAR(128) Name of the user who owns the object
OBJECT_NAME VARCHAR(128) Object name
OBJECT_TYPE VARCHAR(20) Object type
ID NUMBER Unique number assigned to each task of the physical plan
PARENT_ID NUMBER
ID of the next execution step that operates on the output of
the current step
DEPTH NUMBER Tree level of the physical plan
POSITION NUMBER Position of the operation among operations with the same PARENT_ID
SEARCH_COLUMNS NUMBER Number of keys used for index search
COST NUMBER Cost of an operation estimated by the optimizer
CPU_COST NUMBER CPU cost of an operation estimated by the optimizer
IO_COST NUMBER I/O cost of an operation estimated by the optimizer
CARDINALITY NUMBER Number of rows to display estimated by the optimizer
PSTART VARCHAR(38) First partition to be accessed in the partitioned table.
PEND VARCHAR(38) Last partition to be accessed in the partitioned table.
OTHERS VARCHAR(4000)
Execution step related information, which a user can flexibly
use.
ACCESS_PREDICATES VARCHAR(4000) Predicate information for index accesses or join operations
FILTER_PREDICATES VARCHAR(4000) Predicate information for filter processing
Examples
-- Execute an SQL query.
SQL> SELECT JOB_ID, round(AVG(SALARY)) AVG
FROM EMPLOYEES
GROUP BY JOB_ID order by 2 desc;
JOB_ID AVG
---------- ----------
AD_PRES 24000
AD_VP 17000
MK_MAN 13000
AC_MGR 12000
SA_MAN 10500
SA_REP 8867
AC_ACCOUNT 8300
IT_PROG 6400
MK_REP 6000
ST_MAN 5800
AD_ASST 4400
ST_CLERK 2925
12 rows selected.
-- Search for an SQL_ID using partial character sets of the SQL query.
SQL> select sql_id, sql_text
from v$sqltext
where sql_text like '%FROM EMPLOYEES%';
SQL_ID SQL_TEXT
---------- ----------------------------------------------------------------
521 SELECT JOB_ID, round(AVG(SALARY)) AVG FROM EMPLOYEES GROUP BY JO
-- How to view the full SQL text.
-- (When querying v$sqltext, the SQL text is broken into pieces.)
SQL> select sql_id, aggr_concat(sql_text, '' order by piece) as sql
from v$sqltext
where sql_id=521 group by sql_id;
SQL_ID SQL
---------- --------------------------------------------------------------------------
521 SELECT JOB_ID, round(AVG(SALARY)) AVG FROM EMPLOYEES GROUP BY JOB_ID
order by 2 desc
1 row selected.
-- Identify the SQL query using V$SQL_PLAN. (with the SQL_ID searched.)
SQL> SELECT SUBSTRB(TO_CHAR(ID), 1, 3) || LPAD(' ', LEVEL * 2) || UPPER(OPERATION) ||
DECODE(OBJECT_NAME, NULL, NULL, ': '||OBJECT_NAME) || ' (Cost:' || COST ||
', %%CPU:' || DECODE(COST, 0, 0, TRUNC((COST-IO_COST)/COST * 100))|| ', Rows:'
|| CARDINALITY || ') ' || DECODE(PSTART, '', '', '(PS:' || PSTART || ', PE:' ||
PEND || ')') AS "Execution Plan"
FROM (SELECT * FROM V$SQL_PLAN WHERE SQL_ID = 521)
START WITH DEPTH = 1
CONNECT BY PRIOR ID = PARENT_ID AND PRIOR SQL_ID = SQL_ID
ORDER SIBLINGS BY POSITION;
Execution Plan
-----------------------------------------------------------------------------
1 ORDER BY (SORT) (Cost:26, %%CPU:0, Rows:12)
2 GROUP BY (HASH) (Cost:26, %%CPU:0, Rows:12)
3 TABLE ACCESS (FULL): EMPLOYEES (Cost:26, %%CPU:0, Rows:20)
3 rows selected
4. Others
4.1. The SQL_TRACE_DEST Parameter
SQL_TRACE_DEST specifies the directory in which the SQL trace file is stored. It must be specified as an absolute
path.
SQL trace files are generally stored in the $TB_HOME/instance/$TB_SID/log/sqltrace directory. The directory can
be modified using the following parameters.
Property Description
Type String
Default Value ""
Class Optional, Adjustable, Dynamic, System
Configuration method Set in the TIP file and restart the server, or change the directory using the
ALTER statement.
Syntax • TIP File
- SQL_TRACE_DEST=<Directory>
• ALTER Statement
- ALTER SYSTEM SET SQL_TRACE_DEST=<Directory>
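For example (the directory path below is hypothetical; any absolute path can be used), the destination can be changed at runtime with the ALTER statement, or fixed in the TIP file before a restart:

```sql
-- Change the trace file destination dynamically (absolute path required).
SQL> ALTER SYSTEM SET SQL_TRACE_DEST='/data/tibero/sqltrace';

-- Equivalent TIP file entry (takes effect after a server restart):
-- SQL_TRACE_DEST=/data/tibero/sqltrace
```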
4.2. Privilege Issues when using AUTOTRACE in tbSQL
Notes
The following message appears when an SQL query is performed without the PLUSTRACE role. The
PLUSTRACE role must be granted to the user.
-- Access TBSQL and apply the Explain option.
SQL> set autotrace on explain;
-- The following message occurs when an SQL query is performed in the session without
-- the PLUSTRACE role.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
TBR-8033: Specified schema object was not found.
at line 1, column 424:
"Execution Plan" FROM (SELECT * FROM V$SQL_PLAN WHERE SQL_ID = 115 AND HASH_V
^
TBS-70035: Unable to display plan: check PLUSTRACE role.
TBR-8033: Specified schema object was not found.
at line 1, column 98:
RS AS "Remote SQL Information" FROM V$SQL_PLAN WHERE SQL_ID = 115 AND
^
TBS-70035: Unable to display plan: check PLUSTRACE role.
TBR-8033: Specified schema object was not found.
at line 1, column 66:
with x as (select id, access_predicates, filter_predicates from v$sql_plan where
^
TBS-70035: Unable to display plan: check PLUSTRACE role.
-- Create the PLUSTRACE role.
SQL> @plustrace.sql
TBR-7070: Specified role 'PLUSTRACE' was not found.
Role 'PLUSTRACE' created.
Granted.
Granted.
Granted.
Granted.
File finished.
-- Run an SQL query to check the SQL execution plan of the query.
SQL> SELECT AVG(SALARY) AVG FROM employee GROUP BY DEPT_CD;
AVG
----------
4666.66667
1 row selected.
SQL ID: 115
Plan Hash Value: 1910332783
Execution Plan
--------------------------------------------------------------------------------
1 GROUP BY (HASH) (Cost:26, %CPU:0, Rows:1)
2 TABLE ACCESS (FULL): EMPLOYEE (Cost:26, %CPU:0, Rows:3)