The document discusses analyzing the system capacity of the database and middle tiers for an Oracle E-Business Suite environment. It covers various statistical methods for analyzing the database tier capacity, including simple math models using CPU and memory metrics, linear regression analysis of logical reads versus CPU utilization, and queuing theory models. It also provides recommendations for analyzing the middle tier, such as checking the application server access logs for errors, tuning JDBC settings, sizing the concurrent managers correctly, and analyzing long-running concurrent programs. By applying these analytical techniques, the document aims to help determine whether the system is properly sized for its workload.
The document discusses analyzing database systems using a 3D method for performance analysis. It introduces the 3D method, which looks at performance from the perspectives of the operating system (OS), Oracle database, and applications. The 3D method provides a holistic view of the system that can help identify issues and direct solutions. It also covers topics like time-based analysis in Oracle, how wait events are classified, and having a diagnostic framework for quick troubleshooting using tools like the Automatic Workload Repository report.
The document summarizes different statistical methods for analyzing the capacity of an Oracle database server:
Simple math models can analyze single metrics like CPU usage but offer low precision. Linear regression analysis uses logical I/O to predict CPU utilization; in one example it indicated the database could handle roughly 2000% more workload. Queuing theory models the CPU and I/O subsystems as queues and applies the Erlang C formula to forecast capacity and scalability more precisely than the simple models. The document provides examples of applying each method to capacity analysis.
Create your oracle_apps_r12_lab_with_less_than_us1000 by Ajith Narayanan
This document summarizes a presentation on how to create an Oracle Apps R12 lab with less than $1000. It discusses designing a multi-tier architecture for Oracle Apps R12 on a Linux platform using inexpensive hardware. Specifically, it describes how to set up 5 Dell desktops running Oracle Linux and connected via switches to act as nodes, with a NAS storage device providing shared storage between the nodes. Software components like Oracle Grid Infrastructure, Oracle Database, and Oracle E-Business Suite can then be installed to implement the multi-tier RAC configuration. The presentation provides step-by-step instructions for tasks like preparing the shared storage, installing the various Oracle software components, and configuring the applications tier to use the RAC database.
Crack the complexity of oracle applications r12 workload v2 by Ajith Narayanan
This document summarizes Ajith Narayanan's presentation on characterizing Oracle Applications R12 workload. The presentation covers instrumentation, collection, classification, measurement, and interpretation of workload data. The goal is to understand workload trends and impacts in order to optimize system resources and performance. Key aspects discussed include identifying workload classes like forms, batches, and self-service applications; measuring resource usage correlated to workload; and using interpretations to make scheduling and tuning decisions.
Ajith Narayanan presented methods for analyzing the capacity of Oracle E-Business Suite databases and middle tiers. He discussed using simple math to analyze CPU and memory usage. Linear regression analysis can model relationships between variables like logical reads and CPU utilization. Queuing theory and Erlang C forecasting can provide a baseline capacity and predict scalability. The presentation also covered checking access logs, JDBC settings, sizing concurrent managers correctly, analyzing concurrent programs, and ensuring JVM settings are optimized.
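The Erlang C forecasting mentioned above treats the CPUs as a multi-server queue and computes the probability that an arriving request must wait. A minimal sketch of the standard Erlang C formula, with illustrative numbers (8 cores, 6 Erlangs of offered load):

```python
import math

def erlang_c(servers: int, traffic: float) -> float:
    """Erlang C: probability an arriving request must queue, given
    `servers` parallel servers (e.g. CPU cores) and offered load
    `traffic` in Erlangs (arrival rate x average service time)."""
    if traffic >= servers:
        return 1.0  # unstable: the queue grows without bound
    summation = sum(traffic ** k / math.factorial(k) for k in range(servers))
    top = (traffic ** servers / math.factorial(servers)) * (servers / (servers - traffic))
    return top / (summation + top)

# Example: 8 cores at 75% average utilization (6 Erlangs of CPU demand)
p_wait = erlang_c(8, 6.0)
print(f"Probability of queuing: {p_wait:.3f}")
```

Raising the offered traffic toward the server count drives the queuing probability toward 1, which is why response times degrade sharply as CPU utilization climbs past roughly 75-80%.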
Best practices for large oracle apps r12 implementations apps14 by Ajith Narayanan
This document provides an agenda and overview for a presentation on best practices for large Oracle Applications R12 implementations. The agenda covers application tier topics like Forms, response time, concurrent processing, and workflow. It also covers database tier topics such as AWR and ADDM reports, ORACHK health checks, and cluster callout scripts. The goal is to review strategies for ensuring performance, scalability, and proactive issue identification.
Best practices for_large_oracle_apps_r12_implementations by Ajith Narayanan
This document provides an agenda and overview for a presentation on best practices for large Oracle Applications R12 implementations. The agenda covers application tier topics like technology stack, Forms, response time issues, and patching. It also covers database tier topics like CPU utilization, I/O, diagnostics, SQL tuning, and using Oracle RAC. The goal is to make the environment proactive, simple to manage, and reduce support/maintenance costs through established best practices.
An introduction to_rac_system_test_planning_methods by Ajith Narayanan
This document provides an overview and agenda for testing an Oracle Real Application Clusters (RAC) system. It outlines 10 tests to validate that the RAC system is installed and configured correctly, and to verify basic functionality and the system's ability to achieve high availability and performance objectives. The tests include planned node reboots, unplanned node failures, instance failures, and network failures. Metrics like failover time, recovery time, and downtime are proposed to measure success.
Oracle real application clusters system tests with demo by Ajith Narayanan
This document provides details on testing Oracle Real Application Clusters functionality through a series of tests. It begins with an introduction and agenda, then describes 10 tests to validate high availability features including planned and unplanned node/instance failures, network failures, and service failover. Expected results and measures of success are outlined for each test. Sample scripts are also provided.
This document provides an overview and interpretation of the Automatic Workload Repository (AWR) report in Oracle database. Some key points:
- AWR collects snapshots of database metrics and performance data every 60 minutes by default and retains them for 7 days. This data is used by tools like ADDM for self-management and diagnosing issues.
- The top timed waits in the AWR report usually indicate where to focus tuning efforts. Common waits include I/O waits, buffer busy waits, and enqueue waits.
- Other useful AWR metrics include parse/execute ratios, wait event distributions, and top activities to identify bottlenecks like parsing overhead, locking issues, or inefficient SQL.
Hotsos 2011: Mining the AWR repository for Capacity Planning, Visualization, ... by Kristofferson A
The document discusses mining the Automatic Workload Repository (AWR) in Oracle databases for capacity planning, visualization, and other real-world uses. It introduces Karl Arao as a speaker and discusses topics he will cover including AWR, diagnosing performance issues using AWR data, visualization of AWR data, capacity planning, and tools for working with AWR data like scripts and linear regression. References and resources on working with AWR are also provided.
This document outlines 10 vital tips for optimizing Oracle Real Application Clusters (RAC) performance. The tips include: 1) properly sizing capacity and architecture based on hardware components and estimated database sizes; 2) tuning SQL and parallel query performance through techniques like partitioning, parallelism, and reducing full table scans; 3) additional tuning of the database, network, recovery processes, global cache, storage, and Clusterware can further optimize RAC performance.
DB Time, Average Active Sessions, and ASH Math - Oracle performance fundamentals by John Beresniewicz
RMOUG 2020 abstract:
This session will cover core concepts for Oracle performance analysis first introduced in Oracle 10g and forming the backbone of many features in the Diagnostic and Tuning packs. The presentation will cover the theoretical basis and meaning of these concepts, as well as illustrate how they are fundamental to many user-facing features in both the database itself and Enterprise Manager.
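The core relationship these concepts rest on is simple arithmetic: Average Active Sessions (AAS) is DB Time divided by elapsed wall-clock time over the same interval. A worked example with illustrative numbers:

```python
def average_active_sessions(db_time_sec: float, elapsed_sec: float) -> float:
    """AAS = DB Time / elapsed time: the average number of sessions
    either working on or waiting for the database at any instant."""
    return db_time_sec / elapsed_sec

# Example: 1800 seconds of DB Time accumulated over a 600-second window
# means that, on average, 3 sessions were active at any moment.
print(average_active_sessions(1800.0, 600.0))
```

Comparing AAS against the CPU core count gives a quick load gauge: AAS well below the core count suggests headroom, while AAS persistently above it suggests sessions are queuing for resources.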
The document discusses best practices for gathering statistics in Oracle databases. It covers how to gather statistics using the DBMS_STATS package, additional types of statistics like column groups and expression statistics, when to gather statistics such as after data loads, and how to improve statistics gathering performance using parallel execution and incremental gathering for partitioned tables.
This is the presentation on ASH that I did with Graham Wood at RMOUG 2014 and that represents the final best effort to capture essential and advanced ASH content as started in a presentation Uri Shaft and I gave at a small conference in Denmark sometime in 2012 perhaps. The presentation is also available publicly through the RMOUG website, so I felt at liberty to post it myself here. If it disappears it would likely be because I have been asked to remove it by Oracle.
The document discusses Oracle database performance tuning. It covers reactive and proactive performance tuning, the top-down tuning methodology, common types of performance issues, and metrics for measuring performance such as response time and throughput. It also compares online transaction processing (OLTP) systems and data warehouses (DW), and describes different architectures for integrating OLTP and DW systems.
This document provides an overview of Oracle performance tuning. It discusses Oracle architecture including processes, wait events, dynamic views, and tools for performance analysis like Statspack, AWR, Enterprise Manager, SQL tracing and tuning. Key aspects of performance tuning covered include statistics, hints, query rewrite, indexing, and application-level tuning for OLTP workloads.
This document provides an overview of how to use various Oracle performance monitoring and diagnostic tools like ASH, AWR, and SQL Monitor to analyze and troubleshoot performance issues. It begins with introductions and background on the speaker. It then demonstrates how to generate and interpret reports from these tools using the Oracle Enterprise Manager console and command line. It provides examples of querying ASH data directly and using tools like Compare ADDM and SQL Monitor. The document aims to help users quickly understand performance problems by leveraging these built-in Oracle performance diagnostics.
Your tuning arsenal: AWR, ADDM, ASH, Metrics and Advisors by John Kanagaraj
Oracle Database 10g brought in a slew of tuning and performance related tools and indeed a new way of dealing with performance issues. Even though 10g has been around for a while, many DBAs haven’t really used many of the new features, mostly because they are not well known or understood. In this Expert session, we will look past the slick demos of the new tuning and performance related tools and go “under the hood”. Using this knowledge, we will bypass the GUI and look at the views and counters that matter and quickly understand what they are saying. Tools covered include AWR, ADDM, ASH, Metrics, Tuning Advisors and their related views. Much of information about Oracle Database 10g presented in this paper has been adapted from my book and I acknowledge that with gratitude to my publisher - SAMS (Pearson).
Tuning SQL for Oracle Exadata: The Good, The Bad, and The Ugly by Enkitec
This document discusses tuning SQL on Oracle Exadata. It makes three main points:
1. Gathering and displaying execution plan data differs slightly on Exadata compared to non-Exadata databases.
2. The general approach to optimization is similar to non-Exadata, focusing on features like smart scans, storage indexes, and parallelism that provide the most benefit.
3. Rewriting SQL queries can have a dramatic impact on performance by enabling offloading and reducing disk I/O, with examples showing savings of over 98% in run time.
Whitepaper: Exadata Consolidation Success Story by Kristofferson A
1. The document discusses database and server consolidation using Oracle Exadata and describes the challenges of managing highly consolidated environments to ensure quality of service.
2. It outlines a 4-step process for accurate provisioning and capacity planning using a tool called the Provisioning Worksheet: collecting database details, defining the target Exadata hardware capacity, creating a provisioning plan, and reviewing resource utilization.
3. The process relies on basic capacity planning to ensure workload requirements fit available capacity. Database CPU and storage requirements are gathered, a target Exadata configuration is set, databases are mapped to nodes in the plan, and final utilization is summarized to identify any capacity shortfalls.
The document discusses Oracle Database performance tuning. It begins by defining performance as the accepted throughput for a given workload. Performance tuning is defined as optimizing resource use to increase throughput and minimize contention. A performance problem occurs when database tasks do not complete in a timely manner, such as SQL running longer than usual or users facing slowness. Performance problems can be caused by contention for resources, overutilization of the system, or poorly written SQL. The document discusses various performance diagnostics tools and concepts like wait events, enqueues, I/O performance, and provides examples of how to analyze issues related to these areas.
Trivadis TechEvent 2016 Oracle Client Failover - Under the Hood by Robert Bialek, Trivadis
1) The document discusses various techniques for Oracle client failover including operating system level settings like TCP timeouts and virtual IP addresses, as well as Oracle-specific solutions involving database services, Transparent Application Failover (TAF), Fast Application Notification (FAN), and Fast Connection Failover (FCF).
2) It provides an overview of how client failover depends on factors like the Oracle client and database versions, database configuration, network topology, and operating system.
3) The techniques addressed range from basic operating system settings to more advanced Oracle features like database services, TAF, FAN, and FCF that provide automated client failover capabilities.
The document discusses Oracle database performance tuning. It covers identifying and resolving performance issues through tools like AWR and ASH reports. Common causes of performance problems include wait events, old statistics, incorrect execution plans, and I/O issues. The document recommends collecting specific data when analyzing problems and provides references and scripts for further tuning tasks.
The document discusses using Statspack and AWR (Automatic Workload Repository) to analyze SQL performance and identify poorly performing queries. It provides examples of Statspack reports and how to interpret them to find SQL statements that are doing full table scans, experiencing buffer cache misses, or are inefficient due to lack of bind variables. The document also discusses how to identify SQL statements that are causing excessive sorting.
OOUG - Oracle Performance Tuning with AAS by Kyle Hailey
The document provides information about Kyle Hailey and his work in Oracle performance tuning. It includes details about his career history working with Oracle from 1990 to present, including roles in support, porting software versions, benchmarking, and performance tuning. Graphics and clear visualizations are emphasized as important tools for effectively communicating complex technical information and identifying problems. The goal is stated as simplifying database tuning information to empower database administrators.
The document describes Active Session History (ASH), a new methodology for performance tuning introduced by Kyle Hailey. ASH simplifies performance tuning by using statistical sampling to provide a multidimensional view of sessions, SQL, objects, users, and other database components over time. This allows identification of top resource consumers using less data collection than traditional methods.
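The sampling idea behind ASH can be sketched in a few lines: each sample records which SQL each active session was running, so counting samples per SQL id estimates each statement's share of DB time. The sample data below is hypothetical, purely to show the counting step.

```python
from collections import Counter

# Hypothetical ASH-style data: one entry per active session observed at each
# 1-second sample, tagged with the SQL id it was executing at that instant.
ash_samples = ["sql_a", "sql_a", "sql_b", "sql_a", "sql_c",
               "sql_a", "sql_b", "sql_a", "sql_a", "sql_b"]

# Each sample approximates ~1 second of DB time, so sample counts estimate
# each statement's share of total DB time over the interval.
counts = Counter(ash_samples)
total = sum(counts.values())
for sql_id, n in counts.most_common():
    print(f"{sql_id}: ~{n}s of DB time ({100 * n / total:.0f}%)")
```

This is why ASH needs far less data than tracing: a top consumer shows up reliably in the samples even though no individual call was ever timed.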
- Design and develop with performance in mind
- Establish a tuning environment
- Index wisely
- Reduce parsing
- Take advantage of the Cost Based Optimizer
- Avoid accidental table scans
- Optimize necessary table scans
- Optimize joins
- Use array processing
- Consider PL/SQL for “tricky” SQL
DESIGNED DYNAMIC SEGMENTED LRU AND MODIFIED MOESI PROTOCOL FOR RING CONNECTED... by Ilango Jeyasubramanian
This document proposes and evaluates several cache designs for improving performance and energy efficiency in multi-core processors. It introduces a filter cache shared among cores to reduce energy consumption. It then implements a segmented least recently used replacement policy and adaptive bypassing to further improve cache hit rates. Finally, it modifies the MOESI coherence protocol for a ring interconnect topology to address data coherence across cores. Simulations show the proposed cache designs reduce energy usage by 11% and increase cache hit rates by up to 7% compared to baseline designs.
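The segmented LRU policy described above can be illustrated with a minimal sketch (not the paper's implementation): new entries land in a probationary segment and are promoted to a protected segment only on a second access, so a burst of one-shot accesses cannot flush the hot entries.

```python
from collections import OrderedDict

class SegmentedLRU:
    """Minimal segmented LRU sketch: probationary + protected segments."""

    def __init__(self, prob_size: int, prot_size: int):
        self.prob = OrderedDict()  # probationary: first-touch entries
        self.prot = OrderedDict()  # protected: entries accessed at least twice
        self.prob_size, self.prot_size = prob_size, prot_size

    def access(self, key) -> bool:
        """Touch `key`; return True on a cache hit, False on a miss."""
        if key in self.prot:            # hit in protected: refresh recency
            self.prot.move_to_end(key)
            return True
        if key in self.prob:            # hit in probationary: promote
            del self.prob[key]
            self.prot[key] = True
            if len(self.prot) > self.prot_size:
                demoted, _ = self.prot.popitem(last=False)
                self._insert_prob(demoted)  # demote LRU, don't evict outright
            return True
        self._insert_prob(key)          # miss: enter on probation
        return False

    def _insert_prob(self, key):
        self.prob[key] = True
        if len(self.prob) > self.prob_size:
            self.prob.popitem(last=False)   # evict LRU probationary entry
```

For example, with `SegmentedLRU(2, 2)`, an entry accessed twice survives a subsequent stream of three one-shot keys, because those keys churn only through the probationary segment.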
This document provides an introduction to basic techniques for collecting and analyzing Oracle performance data. It discusses developing a clear problem statement, defining important performance metrics, collecting a minimal set of necessary data, ensuring high quality data collection, and techniques for gathering different types of Oracle performance data using tools like STATSPACK, AWR, and tracing. The document emphasizes the importance of focusing data collection on relevant time periods and correlating OS and database statistics.
Oracle real application clusters system tests with demoAjith Narayanan
This document provides details on testing Oracle Real Application Clusters functionality through a series of tests. It begins with an introduction and agenda, then describes 10 tests to validate high availability features including planned and unplanned node/instance failures, network failures, and service failover. Expected results and measures of success are outlined for each test. Sample scripts are also provided.
This document provides an overview and interpretation of the Automatic Workload Repository (AWR) report in Oracle database. Some key points:
- AWR collects snapshots of database metrics and performance data every 60 minutes by default and retains them for 7 days. This data is used by tools like ADDM for self-management and diagnosing issues.
- The top timed waits in the AWR report usually indicate where to focus tuning efforts. Common waits include I/O waits, buffer busy waits, and enqueue waits.
- Other useful AWR metrics include parse/execute ratios, wait event distributions, and top activities to identify bottlenecks like parsing overhead, locking issues, or inefficient SQL.
Hotsos 2011: Mining the AWR repository for Capacity Planning, Visualization, ...Kristofferson A
The document discusses mining the Automatic Workload Repository (AWR) in Oracle databases for capacity planning, visualization, and other real-world uses. It introduces Karl Arao as a speaker and discusses topics he will cover including AWR, diagnosing performance issues using AWR data, visualization of AWR data, capacity planning, and tools for working with AWR data like scripts and linear regression. References and resources on working with AWR are also provided.
This document outlines 10 vital tips for optimizing Oracle Real Application Clusters (RAC) performance. The tips include: 1) properly sizing capacity and architecture based on hardware components and estimated database sizes; 2) tuning SQL and parallel query performance through techniques like partitioning, parallelism, and reducing full table scans; 3) additional tuning of the database, network, recovery processes, global cache, storage, and Clusterware can further optimize RAC performance.
DB Time, Average Active Sessions, and ASH Math - Oracle performance fundamentalsJohn Beresniewicz
RMOUG 2020 abstract:
This session will cover core concepts for Oracle performance analysis first introduced in Oracle 10g and forming the backbone of many features in the Diagnostic and Tuning packs. The presentation will cover the theoretical basis and meaning of these concepts, as well as illustrate how they are fundamental to many user-facing features in both the database itself and Enterprise Manager.
The document discusses best practices for gathering statistics in Oracle databases. It covers how to gather statistics using the DBMS_STATS package, additional types of statistics like column groups and expression statistics, when to gather statistics such as after data loads, and how to improve statistics gathering performance using parallel execution and incremental gathering for partitioned tables.
This is the presentation on ASH that I did with Graham Wood at RMOUG 2014 and that represents the final best effort to capture essential and advanced ASH content as started in a presentation Uri Shaft and I gave at a small conference in Denmark sometime in 2012 perhaps. The presentation is also available publicly through the RMOUG website, so I felt at liberty to post it myself here. If it disappears it would likely be because I have been asked to remove it by Oracle.
The document discusses Oracle database performance tuning. It covers reactive and proactive performance tuning, the top-down tuning methodology, common types of performance issues, and metrics for measuring performance such as response time and throughput. It also compares online transaction processing (OLTP) systems and data warehouses (DW), and describes different architectures for integrating OLTP and DW systems.
This document provides an overview of Oracle performance tuning. It discusses Oracle architecture including processes, wait events, dynamic views, and tools for performance analysis like Statspack, AWR, Enterprise Manager, SQL tracing and tuning. Key aspects of performance tuning covered include statistics, hints, query rewrite, indexing, and application-level tuning for OLTP workloads.
This document provides an overview of how to use various Oracle performance monitoring and diagnostic tools like ASH, AWR, and SQL Monitor to analyze and troubleshoot performance issues. It begins with introductions and background on the speaker. It then demonstrates how to generate and interpret reports from these tools using the Oracle Enterprise Manager console and command line. It provides examples of querying ASH data directly and using tools like Compare ADDM and SQL Monitor. The document aims to help users quickly understand performance problems by leveraging these built-in Oracle performance diagnostics.
Your tuning arsenal: AWR, ADDM, ASH, Metrics and AdvisorsJohn Kanagaraj
Oracle Database 10g brought in a slew of tuning and performance related tools and indeed a new way of dealing with performance issues. Even though 10g has been around for a while, many DBAs haven’t really used many of the new features, mostly because they are not well known or understood. In this Expert session, we will look past the slick demos of the new tuning and performance related tools and go “under the hood”. Using this knowledge, we will bypass the GUI and look at the views and counters that matter and quickly understand what they are saying. Tools covered include AWR, ADDM, ASH, Metrics, Tuning Advisors and their related views. Much of information about Oracle Database 10g presented in this paper has been adapted from my book and I acknowledge that with gratitude to my publisher - SAMS (Pearson).
Tuning SQL for Oracle Exadata: The Good, The Bad, and The Ugly Tuning SQL fo...Enkitec
This document discusses tuning SQL on Oracle Exadata. It makes three main points:
1. Gathering and displaying execution plan data differs slightly on Exadata compared to non-Exadata databases.
2. The general approach to optimization is similar to non-Exadata, focusing on features like smart scans, storage indexes, and parallelism that provide the most benefit.
3. Rewriting SQL queries can have a dramatic impact on performance by enabling offloading and reducing disk I/O, with examples showing savings of over 98% in run time.
Whitepaper: Exadata Consolidation Success StoryKristofferson A
1. The document discusses database and server consolidation using Oracle Exadata and describes the challenges of managing highly consolidated environments to ensure quality of service.
2. It outlines a 4-step process for accurate provisioning and capacity planning using a tool called the Provisioning Worksheet: collecting database details, defining the target Exadata hardware capacity, creating a provisioning plan, and reviewing resource utilization.
3. The process relies on basic capacity planning to ensure workload requirements fit available capacity. Database CPU and storage requirements are gathered, a target Exadata configuration is set, databases are mapped to nodes in the plan, and final utilization is summarized to identify any capacity shortfalls.
The document discusses Oracle Database performance tuning. It begins by defining performance as the accepted throughput for a given workload. Performance tuning is defined as optimizing resource use to increase throughput and minimize contention. A performance problem occurs when database tasks do not complete in a timely manner, such as SQL running longer than usual or users facing slowness. Performance problems can be caused by contention for resources, overutilization of the system, or poorly written SQL. The document discusses various performance diagnostics tools and concepts like wait events, enqueues, I/O performance, and provides examples of how to analyze issues related to these areas.
Trivadis TechEvent 2016 Oracle Client Failover - Under the Hood by Robert BialekTrivadis
1) The document discusses various techniques for Oracle client failover including operating system level settings like TCP timeouts and virtual IP addresses, as well as Oracle-specific solutions involving database services, Transparent Application Failover (TAF), Fast Application Notification (FAN), and Fast Connection Failover (FCF).
2) It provides an overview of how client failover depends on factors like the Oracle client and database versions, database configuration, network topology, and operating system.
3) The techniques addressed range from basic operating system settings to more advanced Oracle features like database services, TAF, FAN, and FCF that provide automated client failover capabilities.
The document discusses Oracle database performance tuning. It covers identifying and resolving performance issues through tools like AWR and ASH reports. Common causes of performance problems include wait events, old statistics, incorrect execution plans, and I/O issues. The document recommends collecting specific data when analyzing problems and provides references and scripts for further tuning tasks.
The document discusses using Statspack and AWR (Automatic Workload Repository) to analyze SQL performance and identify poorly performing queries. It provides examples of Statspack reports and how to interpret them to find SQL statements that are doing full table scans, experiencing buffer cache misses, or are inefficient due to lack of bind variables. The document also discusses how to identify SQL statements that are causing excessive sorting.
OOUG - Oracle Performance Tuning with AAS by Kyle Hailey
The document provides information about Kyle Hailey and his work in Oracle performance tuning. It includes details about his career history working with Oracle from 1990 to present, including roles in support, porting software versions, benchmarking, and performance tuning. Graphics and clear visualizations are emphasized as important tools for effectively communicating complex technical information and identifying problems. The goal is stated as simplifying database tuning information to empower database administrators.
The document describes Active Session History (ASH), a new methodology for performance tuning introduced by Kyle Hailey. ASH simplifies performance tuning by using statistical sampling to provide a multidimensional view of sessions, SQL, objects, users, and other database components over time. This allows identification of top resource consumers using less data collection than traditional methods.
Design and develop with performance in mind
Establish a tuning environment
Index wisely
Reduce parsing
Take advantage of Cost Based Optimizer
Avoid accidental table scans
Optimize necessary table scans
Optimize joins
Use array processing
Consider PL/SQL for “tricky” SQL
DESIGNED DYNAMIC SEGMENTED LRU AND MODIFIED MOESI PROTOCOL FOR RING CONNECTED... by Ilango Jeyasubramanian
This document proposes and evaluates several cache designs for improving performance and energy efficiency in multi-core processors. It introduces a filter cache shared among cores to reduce energy consumption. It then implements a segmented least recently used replacement policy and adaptive bypassing to further improve cache hit rates. Finally, it modifies the MOESI coherence protocol for a ring interconnect topology to address data coherence across cores. Simulations show the proposed cache designs reduce energy usage by 11% and increase cache hit rates by up to 7% compared to baseline designs.
This document provides an introduction to basic techniques for collecting and analyzing Oracle performance data. It discusses developing a clear problem statement, defining important performance metrics, collecting a minimal set of necessary data, ensuring high quality data collection, and techniques for gathering different types of Oracle performance data using tools like STATSPACK, AWR, and tracing. The document emphasizes the importance of focusing data collection on relevant time periods and correlating OS and database statistics.
The document discusses various performance tuning concepts in Oracle including CPU time, database time, reading Statspack/AWR reports, parse CPU to parse elapsed ratio, execute to parse ratio, latches, and wait events. It provides explanations and examples for each concept to help understand how to analyze performance issues from monitoring reports.
Before migrating from 10g to 11g or 12c, take the following considerations into account. It is not as simple as just switching the database engine; considerations are needed at the application level.
The document discusses monitoring and tuning Oracle databases on z/OS and z/Linux systems. It provides an overview of using Statspack to diagnose performance issues from high CPU usage, I/O utilization, or memory usage based on timed events, SQL statements, and tablespace I/O statistics. Potential causes and remedies are described for each area that could lead to bad response times.
The document discusses tuning SQL queries in Oracle databases. It begins by noting that while tools can help, there is no single process for tuning every query as each case depends on factors like the schema design, data distribution and how the optimizer chooses a plan. The document then provides a methodology for investigating and tuning a query with poor performance, including getting the execution plan, checking it visually, and identifying possible causes like stale statistics, missing indexes or inefficient SQL.
- Oracle Database 11g Release 2 provides many advanced features to lower IT costs including in-memory processing, automated storage management, database compression, and real application testing capabilities.
- It allows for online application upgrades using edition-based redefinition which allows new code and data changes to be installed without disrupting the existing system.
- Oracle provides multiple upgrade paths from prior database versions to 11g to allow for predictable performance and a safe upgrade process.
Getting Optimal Performance from Oracle E-Business Suite presentation by Berry Clemens
The document provides guidance on optimizing performance of the Oracle E-Business Suite applications tier. It recommends staying current with the latest release updates and family packs. It also provides tips on optimizing logging settings, workflow processes, Forms processes, JVM processes, and sizing the middle tier for concurrency. Specific recommendations include purging workflow runtime data, translating workflow activity function calls, disabling workflow queue retention, and sizing JVM heaps and Forms memory based on formulas provided.
Ben Prusinski is presenting on Oracle R12 E-Business Suite performance tuning. He will cover methodology, best practices, and techniques from basic to advanced. The presentation includes tuning at the infrastructure, application, and database levels with a focus on a holistic approach. Specific areas that will be discussed are concurrent manager tuning including queue size, sleep cycle, cache size, and number of processes.
The document discusses troubleshooting performance issues for SQL Server. It begins with an introduction and case study on the MS Society of Canada's website. It then discusses optimizing the environment, using Performance Monitor (PerfMon) to monitor performance, and concludes with recommendations to address issues like high CPU usage, slow disk speeds, and insufficient memory.
In Memory Database In Action by Tanel Poder and Kerry Osborne (Enkitec)
The document discusses Oracle Database In-Memory option and how it improves performance of data retrieval and processing queries. It provides examples of running a simple aggregation query with and without various performance features like In-Memory, vector processing and bloom filters enabled. Enabling these features reduces query elapsed time from 17 seconds to just 3 seconds by minimizing disk I/O and leveraging CPU optimizations like SIMD vector processing.
Oracle Database In-Memory Option in Action by Tanel Poder
The document discusses Oracle Database In-Memory option and how it improves performance of data retrieval and processing queries. It provides examples of running a simple aggregation query with and without various performance features like In-Memory, vector processing and bloom filters enabled. Enabling these features reduces query elapsed time from 17 seconds to just 3 seconds by minimizing disk I/O and leveraging CPU optimizations like SIMD vector processing.
Jugal Shah has over 14 years of experience in IT working in roles such as manager, solution architect, DBA, developer and software engineer. He has worked extensively with database technologies including SQL Server, MySQL, PostgreSQL and others. He has received the MVP award from Microsoft for SQL Server in multiple years. Common causes of SQL Server performance problems include configuration issues, design problems, bottlenecks and poorly written queries or code. Various tools can be used to diagnose issues including dynamic management views, Performance Monitor, SQL Server Profiler and DBCC commands.
CS 301 Computer Architecture, Student #1 EID 09, Kingdom of... (.docx) by faithxdunce63732
This document summarizes the results of simulations run to analyze the performance of different processor configurations with varying levels of instruction-level parallelism. The key findings are:
1) For processors with significant memory latency, there is little performance difference between simple in-order and more complex out-of-order designs, as memory latency dominates execution time.
2) Supporting just two concurrently pending instructions provides most of the benefit of more complex out-of-order execution, while greatly reducing hardware complexity.
3) As the mismatch between processor and memory system performance increases, all designs see similar performance, regardless of the level of instruction-level parallelism exploited.
Alex Smola, Professor in the Machine Learning Department, Carnegie Mellon Uni... by MLconf
Fast, Cheap and Deep – Scaling Machine Learning: Distributed high throughput machine learning is both a challenge and a key enabling technology. Using a Parameter Server template we are able to distribute algorithms efficiently over multiple GPUs and in the cloud. This allows us to design very fast recommender systems, factorization machines, classifiers, and deep networks. This degree of scalability allows us to tackle computationally expensive problems efficiently, yielding excellent results e.g. in visual question answering.
Exploring Oracle Database Performance Tuning Best Practices for DBAs and Deve... by Aaron Shilo
The document provides an overview of Oracle database performance tuning best practices for DBAs and developers. It discusses the connection between SQL tuning and instance tuning, and how tuning both the database and SQL statements is important. It also covers the connection between the database and operating system, how features like data integrity and zero downtime updates are important. The presentation agenda includes topics like identifying bottlenecks, benchmarking, optimization techniques, the cost-based optimizer, indexes, and more.
This document provides 9 hints for optimizing Oracle database performance:
1. Take a methodical and empirical approach to tuning by focusing on root causes, measuring performance before and after changes, and avoiding "silver bullets".
2. Design databases and applications with performance in mind from the beginning.
3. Index wisely by only creating useful indexes that improve performance without excessive overhead.
4. Leverage built-in Oracle tools like DBMS_XPLAN and SQL Trace to measure performance.
5. Tune the optimizer by adjusting parameters and statistics to encourage better execution plans.
6. Focus SQL and PL/SQL tuning on problem queries, joins, sorts, and DML statements.
7. Address
This document provides an overview of Automatic Workload Repository (AWR) and Active Session History (ASH) in Oracle databases. AWR collects workload statistics and creates snapshots of the database to be used for performance monitoring. ASH samples active database sessions every second. Together, AWR and ASH provide historical performance data that can be queried and analyzed. The document discusses how to generate and interpret various AWR and ASH reports to identify top SQL, sessions, waits, and troubleshoot performance issues.
Static Load Balancing of Parallel Mining Efficient Algorithm with PBEC in Fre... by IRJET Journal
This document presents a proposed static load balancing method for parallel mining of frequent sequences. It partitions the set of all frequent sequences into disjoint prefix-based equivalence classes (PBECs). It estimates the relative execution time of each PBEC using the sequential PrefixSpan algorithm on sampling data sets. The PBECs are then assigned to processors based on these estimated execution times to balance the computational load. The goal is to develop an efficient parallel frequent sequence mining algorithm that can scale to large datasets through static load balancing of computations across multiple processors.
1. Why Is My Oracle E-Biz Database Slow? A Million Dollar
Question.
Ajith Narayanan
Software Development Advisor, Dell IT
NZOUG-13 , Wellington, New Zealand, March 19th
2. Disclaimer
The views/contents in these slides are those of the author and do not necessarily
reflect those of Oracle Corporation and/or its affiliates/subsidiaries.
The material in this document is for informational purposes only and is published
with no guarantee or warranty, express or implied.
3. Who Am I?
Ajith Narayanan
Software Development Advisor
Dell IT
8+ years of Oracle DBA experience.
Blogger :- http://oracledbascriptsfromajith.blogspot.com
Website Chair:- http://www.oracleracsig.org – Oracle RAC SIG
4. Agenda
Why system capacity? Are you being served properly?
***Database Tier Analysis For Oracle E-Biz Suite***
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What Are The Different Statistical Methods Of Analyzing The System Capacity Of E-Biz Database Tier
Simple Math – CPU & Memory Analysis.
Linear Regression Model for CPU.
Queuing Theory – CPU or I/O bound system.
***Middle-Tier Analysis For Oracle E-Biz Suite***
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Did You Check Your access_log?
JDBC Settings?
Concurrent Managers, Are They Sized Correctly?
Analyzing Concurrent Programs
JVM Settings, Are They Sized Properly? Analyze Before Concluding
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Q&A~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
5. Why System Capacity? Are You Being Served Properly?
How long can you wait to have your favorite dish at your favorite restaurant? The waiter says you will have to wait another hour to be served.
Imagine an angry waiter at your table, refusing to serve you. (No service)
Obviously, you are not happy!!
Any computer system serves you in a similar fashion if not properly sized.
Capacity planning plays a vital role in such situations.
Proper capacity analysis done by the restaurant owner could have helped the waiter serve you your favorite dish faster.
7. What are the different statistical methods of analyzing the system
capacity of the E-Biz database tier?
Fundamental Forecasting Models
Simple Math - This model can take single-component inputs, either application or technical metrics. It is usually used on short-duration projects; the precision is usually low, but sufficient when used appropriately.
Essential Forecasting Mathematics - This method can produce relatively precise forecasts. It is again a single-component input method using technical metrics, and can be used on short-duration projects.
Linear Regression Analysis - This method is typically used to determine how much of some business activity can occur before the system runs out of gas.
Queuing Theory - This is basically an upgrade to essential forecasting mathematics and can be used for high-precision forecasting.
9. Simple Math - CPU
Scenario: Forecast the CPU requirements for a 4-node, 16-CPU database tier; the customer wants to downsize the infrastructure.
On Host1, during peak utilization (1/5/09 3:00 AM - 4:00 AM), CPU utilization was 39.27%.
CPU consumption = 0.3927 x 14,400 s (per-hour available capacity on 4 CPUs) = 5654.88 s
Similarly, on Host2, Host3 and Host4 the observed peak CPU consumptions were 42.87% (avg) = 6173.28 s, 41.22% (avg) = 5935.68 s and 52.11% (avg) = 7503.84 s respectively.
Total CPU requirement = 25267.68 s/hour
Calculate utilization with 16 CPUs
~~~~~~~~~~~~~~~~~~~~~~~~
Considering 10% overhead for the OS, estimated CPU utilization on 8-core (Dell 1950 QC)
physical servers = (25267.68 + 25267.68 * 10%) / (16*60*60) = 0.4825 = 48.25%
Calculate utilization with 12 CPUs
~~~~~~~~~~~~~~~~~~~~~~~~
Considering 10% overhead for the OS, estimated CPU utilization on 4-core (Dell 1950 QC)
physical servers = (25267.68 + 25267.68 * 10%) / (12*60*60) = 0.6434 = 64.34%
Calculate utilization with 8 CPUs
~~~~~~~~~~~~~~~~~~~~~~~~
Considering 10% overhead for the OS, estimated CPU utilization on 4-core (Dell 1950 QC)
physical servers = (25267.68 + 25267.68 * 10%) / (8*60*60) = 0.9651 = 96.51%
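The simple-math arithmetic on this slide is easy to reproduce in a few lines. The sketch below (mine, not part of the original deck) recomputes the per-host figures and target utilizations; the 10% OS overhead factor and 4-CPU-per-host layout are the slide's assumptions:

```python
# Simple-math CPU model: convert per-host peak utilization into
# CPU-seconds consumed per hour, then project total demand onto
# smaller CPU configurations.
HOST_CPUS = 4                                   # each of the 4 nodes has 4 CPUs
peak_util = [0.3927, 0.4287, 0.4122, 0.5211]    # Host1..Host4 peak CPU fraction

# CPU-seconds consumed per hour (hourly capacity = CPUs * 3600 s)
consumed = [u * HOST_CPUS * 3600 for u in peak_util]
total = sum(consumed)                           # total demand in s/hour

OS_OVERHEAD = 0.10                              # assumed 10% OS overhead
for cpus in (16, 12, 8):
    util = total * (1 + OS_OVERHEAD) / (cpus * 3600)
    print(f"{cpus} CPUs -> {util:.2%}")
```

Running it reproduces the slide's projections: roughly 48% on 16 CPUs, 64% on 12, and over 96% on 8, which is why the 8-CPU downsizing option leaves no headroom.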
12. Linear Regression Analysis
WHAT IS LINEAR REGRESSION
========================
Linear regression analysis is a method for investigating relationships among variables, e.g. Logical Reads vs CPU utilization.
The relation is y = mx + c (the equation of a straight line), where c is the y-intercept of the line and m is the slope.
Here the variable "logical reads" is used to predict the value of CPU utilization, so "logical reads" becomes the explanatory variable. The variable whose value is to be predicted is known as the response or dependent variable.
Generally the response variable is denoted Y and the explanatory (predictor) variable is denoted X.
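As an illustration of how such a fit might be computed, the sketch below does an ordinary least-squares fit of y = mx + c in pure Python. The logical-read and CPU samples are made-up numbers for the example, not the presentation's data:

```python
# Least-squares fit of CPU utilization (y) against logical reads/sec (x).
# Sample figures below are illustrative only.
xs = [500, 1000, 1500, 2000, 2500, 3000]    # avg logical reads/sec
ys = [8.0, 14.5, 21.0, 27.6, 34.1, 40.5]    # CPU utilization (%)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)

m = sxy / sxx                  # slope
c = mean_y - m * mean_x        # y-intercept

# Correlation coefficient (1.0 would be a perfect linear relationship)
syy = sum((y - mean_y) ** 2 for y in ys)
r = sxy / (sxx * syy) ** 0.5

print(f"y = {m:.5f}x + {c:.3f}, r = {r:.4f}")
print(f"predicted CPU at 5000 LR/s: {m * 5000 + c:.1f}%")
```

With a correlation near 1.0, as on the next slides, extrapolating along the fitted line gives the workload level at which CPU utilization would exhaust the box.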
13. Linear Regression Analysis
Scenario: Database capacity forecasting
Observations:
• CPU utilization is highly correlated with Logical Reads: correlation coefficient 0.98 (ideal 1.0). The CPU usage and Logical Reads trend lines are similar (spikes match 98%).
14. Linear Regression Analysis
Scenario: Database capacity forecasting
Observations:
• The current capacity can handle an additional 2000% workload while maintaining CPU utilization at 38.62% (considering additional OS overhead, etc.)
• Moreover, there is a downsizing possibility of reducing the box capacity from 8 CPUs to 4 CPUs (as indicated by the rightmost column).
15. Linear Regression Analysis
[Chart: OS memory utilization (in GB) vs # of user sessions (Host1); AvgLRD/Sec plotted against Date]

Instance  Snap ID  Date            # User Sessions  Swap Util  PGA Util (GB)  Total OS Mem Util (GB)  Mem Util %
DB1       4288     9/3/2008 9:00    333             0           1.97           9.79                    62.97%
DB1       4289     9/3/2008 9:30    544             0           4.45          12.58                    80.92%
DB1       4290     9/3/2008 10:00   720             0           6.78          15.16                    97.57%
DB1       4291     9/3/2008 10:14   888             0           8.84          17.47                   112.41%
DB1       4292     9/3/2008 10:30   968             0          10.08          18.83                   121.14%
DB1       4293     9/3/2008 11:00   915             0           9.42          18.09                   116.40%
DB1       4294     9/3/2008 11:30   547             0           4.36          12.49                    80.37%
DB1       4295     9/3/2008 12:00   294             0           1.32           9.08                    58.42%
Average                             651.13                      5.90          14.18                    92.24%
Max                                 968.00                     10.08          18.83                   121.14%

Fitted trend (Mem Util - Host1, memory in GB vs # of user sessions): y = 6.8776 * e^(0.0011x)

Note: there is a bug in SAR memory utilization data on Linux.
18. Queuing Theory
Queuing theory is basically an upgrade to essential forecasting mathematics. This method can baseline the current capacity along with forecasting it (scalability/downsizing possibility).
Erlang C Forecasting
===============
Agner Krarup Erlang (1878-1929) is the man behind this math. He studied the performance of telephone networks.
When the Erlang C function is used, we do not apply the essential forecasting response-time formulas. Instead, we have a single new queue-time formula that can be applied to both the CPU and I/O subsystems.
For CPU subsystems there is only one queue, so the queue arrival rate is the entire system arrival rate (λsys). But for I/O subsystems, the arrival rate at each queue (λq) is the system arrival rate (λsys) divided by the number of I/O devices.
U  = (St × λq) / m
Q  = λq × Qt
Ec = Erlang(m, St, λq)
Qt = (Ec × St) / (m × (1 − U))
(where m is the number of servers in the queue)
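The queue-time formula above can be computed directly once the Erlang C probability is available. The sketch below is my own illustration (not the presenter's worksheet): `st` is the service time, `lam` the queue arrival rate, and `m` the number of servers (CPUs or I/O devices); the example workload numbers are made up:

```python
from math import factorial

def erlang_c(m, a):
    """Erlang C: probability an arrival must queue, for m servers and
    offered load a = lam * st (in erlangs). Requires a < m (U < 100%)."""
    top = (a ** m / factorial(m)) * (m / (m - a))   # a^m/m! * 1/(1-U)
    bottom = sum(a ** k / factorial(k) for k in range(m)) + top
    return top / bottom

def queue_time(m, st, lam):
    """Qt = Ec * St / (m * (1 - U)), with U = St * lam / m."""
    a = lam * st
    u = a / m
    return erlang_c(m, a) * st / (m * (1 - u))

# Example: 8 CPUs, 0.05 s of CPU per transaction, 120 transactions/s
m, st, lam = 8, 0.05, 120
u = st * lam / m
print(f"U = {u:.0%}, Ec = {erlang_c(m, st * lam):.3f}, "
      f"Qt = {queue_time(m, st, lam) * 1000:.2f} ms")
```

A sanity check on the formulas: with m = 1 this collapses to the classic M/M/1 result, where the probability of queuing equals the utilization.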
21. A Few More Tips For DB Tier Best Practices
BDE Check -
bde_chk_cbo.sql generates a spool file with all Apps-relevant initialization parameters according to the official list in notes 216205.1 and 396009.1
Hugepage sizing for database memory locking
Consider tuning top resource-consuming SQLs (AWR, ADDM & ASH reports)
Deep dive into top wait events (make sure IDLE events are filtered out)
Want to know what is in a trace file? (Use tvdxtat_40beta9 by Christian Antognini)
Want a quick review of an AWR report? (Use Tyler Muth's AWR formatter, http://tylermuth.wordpress.com,
to make AWR reports more readable)
23. Did You Check Your access_log?
Log to check: $LOG_HOME/ora/10.1.3/Apache/access_log*
• Status 500 (internal server error) may typically be seen for a JServ request and often means the JVM has
some kind of problem or has died.
For example, this entry may indicate that the JServ JVM is not responding to any requests:
192.168.1.10 - - [21/Jun/2006:13:25:30 +0100] "POST /oa_servlet/actions/processApplicantSearch HTTP/1.1" 500 0
• Status 403 (forbidden) could typically be seen for oprocmgr requests and often means there is a
misconfiguration that needs to be resolved.
For example, this entry in access_log may indicate a problem with system configuration (oprocmgr.conf):
192.168.1.10 - - [21/Jun/2006:13:25:30 +0100] "GET /oprocmgr-service?cmd=Register&index=0&modName=JServ&grpName=OACoreGroup&port=16000 HTTP/1.1" 403 226
Run the script below to search for the above errors in access_log:
## Start of script
## Check for HTTP statuses in the 400 or 500 range for JServ or PLSQL requests only
awk ' $9>=400 && $9<=599 { print $0 }' access_log* |
  grep -e "servlet" -e "/pls/" |
  grep -v .gif

## Check for requests taking more than 30 seconds to be returned
awk ' $11>30 {print $0} ' access_log*

## This one is not an exception report, you need to check manually. Look for when the JVMs are restarting
grep "GET /oprocmgr-service?cmd=Register" access_log*
## End of script
24. JDBC Settings?
Change /db/appl/<contextname>/inst/apps/<context_hostname>/appl/fnd/12.0.0/secure/<context>.dbc file:
From: FND_MAX_JDBC_CONNECTIONS=500 To: FND_MAX_JDBC_CONNECTIONS=100
From: FND_JDBC_BUFFER_DECAY_INTERVAL=300 To: FND_JDBC_BUFFER_DECAY_INTERVAL=60
From: FND_JDBC_USABLE_CHECK=false To: FND_JDBC_USABLE_CHECK=true
From: FND_JDBC_BUFFER_MIN=1 To: FND_JDBC_BUFFER_MIN=5
From: FND_JDBC_BUFFER_MAX=5 To: FND_JDBC_BUFFER_MAX=50
Change /db/appl/<contextname>/inst/apps/<context_hostname>/appl/admin/<context_hostname>.xml file:
From: s_fnd_max_jdbc_connections=500 To: s_fnd_max_jdbc_connections=100
From: s_fnd_jdbc_buffer_decay_interval=300 To: s_fnd_jdbc_buffer_decay_interval=60
From: s_fnd_jdbc_usable_check=false To: s_fnd_jdbc_usable_check=true
From: s_fnd_jdbc_buffermin=1 To: s_fnd_jdbc_buffermin=5
From: s_fnd_jdbc_buffermax=5 To: s_fnd_jdbc_buffermax=50
25. Concurrent Managers, Are They Sized Correctly?
Thumbrule for Concurrent queue’s configuration
Configuration examples of 3 “typical” queues:
Fast Queue
Sleep = 15 (seconds)
Cache Size = 10
Target = 5
Standard Queue Slow Queue
Sleep = 60 (seconds) Sleep = 60 (seconds)
Cache Size = 20 Cache Size = 10
Target = 10 Target = 5
Target
Make sure the number of targets (processes) don’t exceed more than 20 per queue, and also remember the “rule of thumb” 3-5
processes per CPU. This rule is in most cases exceeded.
Sleep
For a queue (e.g. CUST_FAST) running fast jobs set the Sleep to 15 (seconds).
For a queue (e.g. CUST_SLOW) running slow jobs set the Sleep to 60 (seconds).
For a standard queue (e.g. CUST_STANDARD or STANDARD) set the Sleep to 60 (seconds).
For any other queues set the Sleep to 60 (seconds).
Cache Size
If the Cache Size (CS) is not set, start by setting it equal to the target value. The recommendation is to set CS to 2 times the
target value for fast, slow, and standard queues, as in the examples above.
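The rules above reduce to simple arithmetic. A sketch, assuming an illustrative 4-CPU middle tier (the CPU count and the choice of 4 processes per CPU, the mid-range of 3-5, are assumptions):

```shell
# Derive target and cache size from the CPU count using the rules above:
# 3-5 target processes per CPU, never more than 20 per queue, and
# cache size = 2 x target.
cpus=4
target=$(( cpus * 4 ))             # 4 per CPU, mid-range of the 3-5 rule
if [ "$target" -gt 20 ]; then      # cap at 20 processes per queue
  target=20
fi
cache=$(( 2 * target ))
echo "target=$target cache=$cache"
```

For 4 CPUs this suggests a target of 16 and a cache size of 32; spread the target across several queues rather than putting it all in one.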
27. Analyzing Concurrent Programs
Group the concurrent programs based on their execution history.
High Volume Jobs (>1000 Submissions)
List all jobs with 1000 or more submissions, sorted by number of executions. Use this list to identify jobs that
should be running in a different queue (e.g. the fast, slow, or standard queue; cross-check against the slow and
fast jobs tables to determine this).
Fast Running Jobs
List all jobs whose runtime was 2 minutes or less, sorted by maximum runtime.
Use this list to identify jobs that should be running in a different queue.
Long Running Jobs During Peak Hours
List jobs, with start date and time, whose runtime was 30 minutes or more and which were submitted during the
peak hours, sorted by maximum runtime.
Use this list to identify jobs that could be submitted outside the peak hours.
Check if the "Purge Concurrent Request and/or Manager Data" Concurrent Program is Scheduled
Concurrent Programs That Have Trace Enabled
28. JVM Settings, Are They Sized Properly? Analyze Before Concluding
Node indxyzprd01
JVM    | Full GC (avg) Secs | Full GC (max) Secs | Full GC (min) Secs | Count | Pause Duration (Secs)
FORMS  | 0.004344638        | 0.018386           | 0.002458           | 398   | 1.729166
OACORE | 0.012064052        | 0.08705            | 0.001699           | 542   | 6.538716
OAFM   | 0.004308295        | 0.027928           | 0.002614           | 1056  | 4.549559
Use GCViewer or jvmstat to collect these statistics.
29. $LOG_HOME/ora/10.1.3/opmn/OC4J~oacore~default_group_*
Example:
94562.018: [GC 670227K->595360K(892672K), 0.0221060 secs]
94617.600: [GC 672480K->617324K(799104K), 0.0307160 secs]
94648.483: [GC 694444K->623826K(872384K), 0.0405620 secs]
94706.754: [Full GC 756173K->264184K(790720K), 0.8990440 secs]
94718.575: [GC 458782K->424403K(737536K), 0.0471040 secs]
94740.380: [GC 501646K->436633K(793600K), 0.0656750 secs]
94817.197: [GC 512473K->441116K(795136K), 0.0749340 secs]
Description:
The first column (94562.018, 94617.600, ...) is the time in seconds since JVM startup at which the GC occurred. The text inside the
square brackets indicates whether it was a minor GC or a Full GC. In the figures such as 670227K->595360K, the number to the left of
-> is the size of the live objects before the GC and the number to the right is the size after the GC. The number in parentheses
(892672K) is the total available heap size. The number after the comma is the time the garbage collection took; for example, the
first line took 0.0221060 secs.
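A quick way to summarize such a log is a single awk pass. A sketch, using the sample lines above; the file name `gc_sample.log` is illustrative, and in practice you would point awk at the real OC4J~oacore log:

```shell
# Save the sample GC lines above to a file (illustrative; use the real
# OC4J~oacore log in practice).
cat > gc_sample.log <<'EOF'
94562.018: [GC 670227K->595360K(892672K), 0.0221060 secs]
94617.600: [GC 672480K->617324K(799104K), 0.0307160 secs]
94648.483: [GC 694444K->623826K(872384K), 0.0405620 secs]
94706.754: [Full GC 756173K->264184K(790720K), 0.8990440 secs]
94718.575: [GC 458782K->424403K(737536K), 0.0471040 secs]
94740.380: [GC 501646K->436633K(793600K), 0.0656750 secs]
94817.197: [GC 512473K->441116K(795136K), 0.0749340 secs]
EOF

# Count minor vs. full collections and total the pause times; with spaces
# and commas as field separators, the pause is the next-to-last field.
awk -F'[ ,]+' '
  /Full GC/ { full++;  fullsecs  += $(NF-1); next }
  /\[GC /   { minor++; minorsecs += $(NF-1) }
  END {
    printf "minor GCs: %d, total pause %.4f secs\n", minor, minorsecs
    printf "full GCs: %d, total pause %.4f secs\n",  full,  fullsecs
  }' gc_sample.log | tee gc_summary.txt
```

For the sample above this reports 6 minor GCs (about 0.28 secs total pause) and 1 full GC (about 0.90 secs), which is the frequency-of-collection picture the next step asks you to review.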
Review the frequency of collections; especially major collections (i.e. Full GC)
Recommendations
a) Enable verbose GC to tune heap sizes based on the GC traffic
b) If full GCs are too frequent, consider increasing -Xms and -Xmx
c) Bigger heaps => GCs will take longer
d) Longer GCs => users may experience pauses
e) For the OACoreGroup JVMs, start with the lower of the following two values:
   Number of cores on the middle tier
   Peak concurrent users / 100
Note: If you have 2 x dual-core CPUs in your server and have 500 peak users, then 4 JVMs is the recommended starting point, since the
number of cores is the lower number. However, if you only had 300 peak users, then you would configure 3 JVMs as a starting point, as
this is now the lower figure. Size your maximum heap as either 512 MB or 1 GB. If you start with 512 MB and find that more memory is
required, increase to a maximum of 1 GB. If more memory than 1 GB is required, then add another JVM instead (free physical memory
allowing) to increase the total memory available overall.
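The starting-point rule in the note is just the lower of two numbers. A sketch, using the 300-peak-user example figures from the text (both values are illustrative):

```shell
# Starting number of OACoreGroup JVMs: the lower of the core count and
# peak concurrent users / 100 (figures taken from the example above).
cores=4          # 2 x dual-core CPUs
peak_users=300
jvms=$(( peak_users / 100 ))
if [ "$cores" -lt "$jvms" ]; then
  jvms=$cores
fi
echo "start with $jvms oacore JVM(s)"
```

With 300 peak users on 4 cores this prints a starting point of 3 JVMs; with 500 peak users the core count (4) would win instead.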
32. Create An Impact With Capacity Analysis
So let's look at an example: take 4-CPU Intel boxes and put them together in a cluster with Oracle 11g and RAC
on top:
Price for the hardware: about US$15,000 or so.
Price for the OS (Linux): about US$0.5 or thereabouts (it depends!)
Price for Oracle with RAC: US$480,000.
So that’s half a million to Oracle. Put another way: It’s 1 dollar to the box movers for every 32 dollars paid
for Oracle RAC.
Psychologically it’s hard for the customers to understand that they have to buy something that expensive
to run on such cheap hardware. The gap is too big, and Oracle will need to address it soon.
There's nothing like RAC on the market, but that doesn't mean you have to buy RAC. I usually joke that
it's like buying a car for US$10,000 that has all the facilities you need from a good and stable car.
Airbags and ABS brakes are US$500,000 extra, by the way. Well, airbags and ABS are wonderful to have,
and they increase your safety. But it's a lot of money compared to the basic car price.