This report summarizes the workload on the ERPSIT database with the following key details:
- The database has 2 instances and is hosted on a Linux server with 4 CPUs and 7.8GB of memory.
- Between snapshots 3004 and 3005, there was 60.1 minutes of activity with 174 sessions.
- The largest consumers of database time were SQL execute elapsed time at 94.7% and DB CPU time at 63.4%.
Extreme Linux Performance Monitoring and Tuning - Milind Koyande
This document provides an introduction to monitoring Linux system performance. It discusses determining the type of application running and establishing a baseline of typical system usage. Key CPU concepts are then outlined such as hardware interrupts, soft interrupts, real-time threads and kernel/user threads. Context switches between threads and the thread scheduling queue are also introduced. The goal is to understand typical system behavior and identify any bottlenecks.
Kernel Recipes 2019 - ftrace: Where modifying a running kernel all started - Anne Nicolas
The document describes the ftrace function tracing tool in Linux kernels. It allows attaching to functions in the kernel to trace function calls. It works by having the GCC compiler insert indirect function entry calls. These calls are recorded during linking and replaced with nops at boot time for efficiency. This allows function tracing with low overhead by tracing the indirect function entry calls.
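Once function tracing is enabled, the results appear as text lines in ftrace's trace file. As a rough illustration of what consuming that output looks like, here is a small Python sketch that parses lines in the style of the function tracer and counts calls per function (the sample lines are invented for illustration):

```python
from collections import Counter

# Lines in the style of ftrace's function tracer output:
#   <task>-<pid> [cpu] <flags> <timestamp>: <function> <- <caller>
SAMPLE_TRACE = """\
bash-2613  [001] d... 102.123456: mutex_lock <- do_sys_open
bash-2613  [001] d... 102.123459: kmem_cache_alloc <- getname_flags
sshd-1009  [000] d... 102.123501: mutex_lock <- do_sys_open
"""

def count_calls(trace_text):
    """Count traced function calls per function name."""
    counts = Counter()
    for line in trace_text.splitlines():
        # The traced function sits between ": " and " <- ".
        _, _, rest = line.partition(": ")
        func, sep, _caller = rest.partition(" <- ")
        if sep:
            counts[func] += 1
    return counts

print(count_calls(SAMPLE_TRACE))
```

On a real system the same text would be read from the tracefs trace file rather than a hard-coded string.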
zfsday talk (a video is on the last slide). The performance of the file system, or disks, is often the target of blame, especially in multi-tenant cloud environments. At Joyent we deploy a public cloud on ZFS-based systems, and frequently investigate performance with a wide variety of applications in growing environments. This talk is about ZFS performance observability, showing the tools and approaches we use to quickly show what ZFS is doing. This includes observing ZFS I/O throttling, an enhancement added to illumos-ZFS to isolate performance between neighbouring tenants, and the use of DTrace and heat maps to examine latency distributions and locate outliers.
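The latency-distribution idea behind those heat maps can be sketched without DTrace: bucket each latency sample into power-of-two bins, the same shape DTrace's quantize() aggregation produces, so outliers stand out in their own bucket. A minimal Python sketch with invented sample data:

```python
from collections import Counter

def quantize(latencies_us):
    """Bucket latencies into power-of-two bins, like DTrace's quantize()."""
    hist = Counter()
    for us in latencies_us:
        bucket = 1
        while bucket <= us:
            bucket *= 2
        hist[bucket] += 1  # bucket is the smallest power of two > us
    return dict(sorted(hist.items()))

# Mostly-fast I/O with one slow outlier: the outlier lands alone in a far bucket.
samples = [90, 110, 130, 95, 8000]
print(quantize(samples))
```

Plotting one such histogram per time interval, column by column, gives the latency heat maps the talk describes.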
Computing Performance: On the Horizon (2021) - Brendan Gregg
Talk by Brendan Gregg for USENIX LISA 2021. https://www.youtube.com/watch?v=5nN1wjA_S30 . "The future of computer performance involves clouds with hardware hypervisors and custom processors, servers running a new type of BPF software to allow high-speed applications and kernel customizations, observability of everything in production, new Linux kernel technologies, and more. This talk covers interesting developments in systems and computing performance, their challenges, and where things are headed."
The document outlines common problems and solutions for optimizing performance in Oracle Real Application Clusters (RAC). It discusses RAC fundamentals like architecture and cache fusion. Common problems discussed include lost blocks due to interconnect issues, disk I/O bottlenecks, and expensive queries. Diagnostics tools like AWR and ADDM can identify cluster-wide I/O and query plan issues impacting performance. Configuring the private interconnect, I/O, and addressing bad SQL can help resolve performance problems.
VMworld 2013
Lenin Singaravelu, VMware
Haoqiang Zheng, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Kernel Recipes 2017: Performance Analysis with BPF - Brendan Gregg
Talk by Brendan Gregg at Kernel Recipes 2017 (Paris): "The in-kernel Berkeley Packet Filter (BPF) has been enhanced in recent kernels to do much more than just filtering packets. It can now run user-defined programs on events, such as on tracepoints, kprobes, uprobes, and perf_events, allowing advanced performance analysis tools to be created. These can be used in production as the BPF virtual machine is sandboxed and will reject unsafe code, and are already in use at Netflix.
Beginning with the bpf() syscall in 3.18, enhancements have been added in many kernel versions since, with major features for BPF analysis landing in Linux 4.1, 4.4, 4.7, and 4.9. Specific capabilities these provide include custom in-kernel summaries of metrics, custom latency measurements, and frequency counting kernel and user stack traces on events. One interesting case involves saving stack traces on wake up events, and associating them with the blocked stack trace: so that we can see the blocking stack trace and the waker together, merged in kernel by a BPF program (that particular example is in the kernel as samples/bpf/offwaketime).
This talk will discuss the new BPF capabilities for performance analysis and debugging, and demonstrate the new open source tools that have been developed to use it, many of which are in the Linux Foundation iovisor bcc (BPF Compiler Collection) project. These include tools to analyze the CPU scheduler, TCP performance, file system performance, block I/O, and more."
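The off-wake merging described above — pairing the blocked task's stack with its waker's stack — can be sketched in user space. This toy Python version joins the two stacks into one folded key for frequency counting, loosely mirroring what samples/bpf/offwaketime does in kernel context (the stack contents here are invented):

```python
from collections import Counter

def offwake_key(blocked_stack, waker_stack):
    """Join the blocked (off-CPU) stack and the waker stack into one folded key."""
    # Waker frames are conventionally placed past a separator so a flame
    # graph can render both sides of the wakeup in one frame stack.
    return ";".join(blocked_stack) + ";--;" + ";".join(reversed(waker_stack))

counts = Counter()
# One blocked-task stack and the stack of the task that woke it:
blocked = ["main", "read_file", "vfs_read", "wait_on_page"]
waker = ["ksoftirqd", "blk_done_softirq", "end_page_writeback"]
counts[offwake_key(blocked, waker)] += 1
for key, n in counts.items():
    print(n, key)
```

In the real BPF program the counting happens in an in-kernel map at wakeup time; only the aggregated counts are read out by user space.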
How Netflix Tunes EC2 Instances for Performance - Brendan Gregg
CMP325 talk for AWS re:Invent 2017, by Brendan Gregg. "
At Netflix we make the best use of AWS EC2 instance types and features to create a high performance cloud, achieving near bare metal speed for our workloads. This session will summarize the configuration, tuning, and activities for delivering the fastest possible EC2 instances, and will help other EC2 users improve performance, reduce latency outliers, and make better use of EC2 features. We'll show how we choose EC2 instance types, how we choose between EC2 Xen modes: HVM, PV, and PVHVM, and the importance of EC2 features such as SR-IOV for bare-metal performance. SR-IOV is used by EC2 enhanced networking, and recently for the new i3 instance type for enhanced disk performance as well. We'll also cover kernel tuning and observability tools, from basic to advanced. Advanced performance analysis includes the use of Java and Node.js flame graphs, and the new EC2 Performance Monitoring Counter (PMC) feature released this year."
Automatic NUMA balancing aims to improve performance on systems with Non-Uniform Memory Access (NUMA) by tracking where tasks access memory and placing tasks on nodes where their memory is located. It uses NUMA hinting page faults, page migration, task grouping, and fault statistics to determine optimal task placement. Pseudo-interleaving spreads tasks and memory across nodes to maximize memory bandwidth for workloads spanning multiple nodes. Evaluation shows automatic NUMA balancing can provide performance benefits for many workloads on NUMA systems without manual tuning.
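The placement decision above can be sketched as: for each task, tally recent NUMA hinting faults per node and prefer the node where most of the task's memory accesses occurred. A toy Python version (task names and fault counts invented):

```python
def preferred_node(fault_counts):
    """Pick the NUMA node with the most recent hinting faults for a task."""
    return max(fault_counts, key=fault_counts.get)

# Hinting-fault counts per node, as the kernel would gather over a scan period:
task_faults = {
    "db_worker": {0: 120, 1: 840},  # mostly touches memory on node 1
    "log_flush": {0: 300, 1: 40},   # mostly touches memory on node 0
}

placement = {task: preferred_node(faults) for task, faults in task_faults.items()}
print(placement)
```

The kernel's real policy also weighs task grouping, load balance, and migration cost; this sketch shows only the fault-statistics signal.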
Velocity 2017: Performance Analysis Superpowers with Linux eBPF - Brendan Gregg
Talk for Velocity 2017 by Brendan Gregg: Performance Analysis Superpowers with Linux eBPF.
"Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will investigate this new technology, which sooner or later will be available to everyone who uses Linux. The talk will dive deep on these new tracing, observability, and debugging capabilities. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
Talk for QConSF 2015: "Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of system performance tools, touring common problems with system metrics, monitoring, statistics, visualizations, measurement overhead, and benchmarks. This will likely involve some unlearning, as you discover tools you have been using for years, are in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many popular talks on operating system performance tools. This is an anti-version of these talks, to focus on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! This talk will include advice and methodologies for verifying new performance tools, understanding how they work, and using them successfully."
Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of Linux performance tools, touring common problems with system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. You might discover that tools you have been using for years, are in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many talks on tools that work, including giving the Linux PerformanceTools talk originally at SCALE. This is an anti-version of that talk, to focus on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! This talk will include advice for verifying new performance tools, understanding how they work, and using them successfully.
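One concrete way metrics mislead, in the spirit of the talk: an average can look plausible while hiding the tail entirely. A small Python example with invented latencies:

```python
# 99 fast requests and one very slow one: averages hide the outlier.
latencies_ms = [10] * 99 + [2000]

mean = sum(latencies_ms) / len(latencies_ms)
median = sorted(latencies_ms)[len(latencies_ms) // 2]
worst = max(latencies_ms)

# The mean is 3x the typical request, yet still says nothing about the
# 2-second outlier that one user in a hundred actually experienced.
print(f"mean={mean:.1f}ms median={median}ms max={worst}ms")
```

This is why latency work leans on percentiles, histograms, and heat maps rather than a single average.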
Shak larry-jeder-perf-and-tuning-summit14-part1-final - Tommy Lee
This document provides an overview and agenda for a performance analysis and tuning presentation focusing on Red Hat Enterprise Linux evolution, NUMA scheduling improvements, and use of cgroups/containers for resource management. Key points include how RHEL has incorporated features like tuned profiles, transparent hugepages, automatic NUMA balancing, and how cgroups can guarantee quality of service and enable dynamic resource allocation for multi-application environments. Performance results are shown for databases and SPEC benchmarks utilizing these features.
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
Kernel Recipes 2017 - Understanding the Linux kernel via ftrace - Steven Rostedt - Anne Nicolas
Ftrace is the official tracer of the Linux kernel. It has been a part of Linux since 2.6.31, and has grown tremendously ever since. Ftrace’s name comes from its most powerful feature: function tracing. But the ftrace infrastructure is much more than that. It also encompasses the trace events that are used by perf, as well as kprobes that can dynamically add trace events that the user defines.
This talk will focus on learning how the kernel works by using the ftrace infrastructure. It will show how to see what happens within the kernel during a system call; learn how interrupts work; see how one's processes are being scheduled, and more. A quick introduction to some tools like trace-cmd and KernelShark will also be demonstrated.
Steven Rostedt, VMware
The document summarizes a talk on container performance analysis. It discusses identifying bottlenecks at the host, container, and kernel level using various Linux performance tools. It then provides an overview of how containers work in Linux using namespaces and control groups (cgroups). Finally, it demonstrates some example commands like docker stats, systemd-cgtop, and bcc/BPF tools that can be used to analyze containers and cgroups from the host system.
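Much of the host-side analysis the talk covers reduces to reading small key/value files under the cgroup filesystem. A Python sketch that parses text in the format of cgroup v2's cpu.stat (the sample content is hard-coded here rather than read from /sys/fs/cgroup):

```python
def parse_cpu_stat(text):
    """Parse cgroup v2 cpu.stat-style 'key value' lines into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        stats[key] = int(value)
    return stats

# Invented sample in the shape of a container's cpu.stat file:
SAMPLE = """\
usage_usec 2541000
user_usec 1800000
system_usec 741000
nr_throttled 12
throttled_usec 98000"""

stats = parse_cpu_stat(SAMPLE)
# Throttling counters are the quick tell for a CPU-limited container:
print("throttled events:", stats["nr_throttled"])
```

Tools like systemd-cgtop and docker stats present the same underlying counters, aggregated and rate-converted.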
Talk for Facebook Systems@Scale 2021 by Brendan Gregg: "BPF (eBPF) tracing is the superpower that can analyze everything, helping you find performance wins, troubleshoot software, and more. But with many different front-ends and languages, and years of evolution, finding the right starting point can be hard. This talk will make it easy, showing how to install and run selected BPF tools in the bcc and bpftrace open source projects for some quick wins. Think like a sysadmin, not like a programmer."
OSSNA 2017: Performance Analysis Superpowers with Linux BPF - Brendan Gregg
Talk by Brendan Gregg for OSSNA 2017. "Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will be a deep dive into these new tracing, observability, and debugging capabilities, which sooner or later will be available to everyone who uses Linux. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
Shak larry-jeder-perf-and-tuning-summit14-part2-final - Tommy Lee
This document provides an overview of performance analysis and tuning techniques in Red Hat Enterprise Linux (RHEL). It discusses the tuned profile packages and how they optimize systems for different workloads. Specific topics covered include disk I/O tuning, memory tuning, network performance tuning, and power management techniques. A variety of Linux performance analysis tools are also introduced, including tuned, turbostat, netsniff-ng, and Performance Co-Pilot.
1. DPDK achieves high throughput packet processing on commodity hardware by reducing kernel overhead through techniques like polling, huge pages, and userspace drivers.
2. In Linux, packet processing involves expensive operations like system calls, interrupts, and data copying between kernel and userspace. DPDK avoids these by doing all packet processing in userspace.
3. DPDK uses techniques like isolating cores for packet I/O threads, lockless ring buffers, and NUMA awareness to further optimize performance. It can achieve throughput of over 14 million packets per second on 10GbE interfaces.
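Of the techniques in point 3, the single-producer/single-consumer ring is the easiest to sketch. This Python toy shows the index arithmetic; real DPDK rings are written in C, cache-aligned, avoid locks via per-side index ownership and memory barriers, and support bulk enqueue/dequeue:

```python
class SpscRing:
    """Toy single-producer/single-consumer ring buffer (power-of-two size)."""

    def __init__(self, size=8):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.buf = [None] * size
        self.mask = size - 1
        self.head = 0  # advanced only by the producer
        self.tail = 0  # advanced only by the consumer

    def enqueue(self, pkt):
        if self.head - self.tail == len(self.buf):
            return False  # ring full: drop or retry
        self.buf[self.head & self.mask] = pkt
        self.head += 1
        return True

    def dequeue(self):
        if self.tail == self.head:
            return None  # ring empty
        pkt = self.buf[self.tail & self.mask]
        self.tail += 1
        return pkt

ring = SpscRing(4)
for n in range(3):
    ring.enqueue(f"pkt{n}")
print(ring.dequeue(), ring.dequeue())  # pkt0 pkt1
```

Because only the producer writes head and only the consumer writes tail, the two sides never contend on the same index, which is what makes the lockless design safe for exactly one producer and one consumer.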
The document provides a history of updates to the 7-Zip file compression software, from version 4.18 beta in 2005 to version 4.65 in 2009. Major updates include support for additional archive formats, encryption methods, bug fixes, speed optimizations, and localization additions. Each version listing includes the release date and brief descriptions of the changes and improvements made in that version.
Your Linux AMI: Optimization and Performance (CPN302) | AWS re:Invent 2013 - Amazon Web Services
Your AMI is one of the core foundations for running applications and services effectively on Amazon EC2. In this session, you'll learn how to optimize your AMI, including how you can measure and diagnose system performance and tune parameters for improved CPU and network performance. We'll cover application-specific examples from Netflix on how optimized AMIs can lead to improved performance.
The document discusses PROSE (Partitioned Reliable Operating System Environment), an approach that runs applications in specialized kernel partitions for finer control over system resources and improved reliability. It aims to simplify development of specialized kernels and enable resource sharing across partitions. The approach is evaluated using IBM's research hypervisor rHype, which shows PROSE can reduce noise and provide more deterministic performance than Linux. Future work focuses on running larger commercial workloads and further performance/noise experiments.
[Open Infrastructure & Cloud Native Days Korea 2019]
We share case studies of building customer-facing services with community versions of OpenStack and Ceph: an enterprise cloud service designed for flexibility, and an exchange service built and operated under strict security requirements. The talk also covers the technology stack used in these projects, troubleshooting cases, and optimization approaches. For OpenStack, count on Open Source Consulting.
#openstack #ceph #openinfraday #cloudnative #opensourceconsulting
This document provides an overview of kernel tuning and customizing for performance on Enterprise Linux. It discusses monitoring tools, basic tuning steps like disabling unused services, memory tuning including hugepages and transparent huge pages, swap/cache tuning. It also covers I/O and filesystem tuning and networking tuning. The goal is to provide concepts and approaches for tuning the major components to optimize performance.
This document provides a performance engineer's predictions for computing performance trends in 2021 and beyond. The engineer discusses trends in processors, memory, disks, networking, runtimes, kernels, hypervisors, and observability. For processors, predictions include multi-socket systems becoming less common, the future of simultaneous multithreading being unclear, practical core count limits being reached in the 2030s, and more processor vendors including ARM-based and RISC-V options. Memory predictions focus on many workloads being memory-bound currently.
The document discusses using Automatic Workload Repository (AWR) to analyze IO subsystem performance. It provides examples of AWR reports including foreground and background wait events, operating system statistics, wait histograms. The document recommends using this data to identify IO bottlenecks and guide tuning efforts like optimizing indexes to reduce full table scans.
The document provides an overview of using Automatic Workload Repository (AWR) for memory analysis in an Oracle database. It discusses various memory structures like the database buffer cache, shared pool, and process memory. It outlines signs of memory issues and describes analyzing the top waits, load profile, instance efficiency, SQL areas, and other AWR report sections to identify and address performance problems related to memory configuration and usage.
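One derived metric such an AWR memory analysis commonly leans on is the buffer cache hit ratio, computed from physical and logical reads. A hedged Python sketch with invented statistic values in the shape of AWR instance-activity data:

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Classic Oracle buffer cache hit ratio: 1 - physical / logical reads."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# Invented values standing in for AWR instance-activity statistics:
ratio = buffer_cache_hit_ratio(physical_reads=12_000,
                               db_block_gets=150_000,
                               consistent_gets=450_000)
print(f"buffer cache hit ratio: {ratio:.1%}")
```

A high ratio alone does not prove the cache is sized well, which is why the document pairs it with wait events, the load profile, and SQL-area analysis.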
Automatic NUMA balancing aims to improve performance on systems with Non-Uniform Memory Access (NUMA) by tracking where tasks access memory and placing tasks on nodes where their memory is located. It uses NUMA hinting page faults, page migration, task grouping, and fault statistics to determine optimal task placement. Pseudo-interleaving spreads tasks and memory across nodes to maximize memory bandwidth for workloads spanning multiple nodes. Evaluation shows automatic NUMA balancing can provide performance benefits for many workloads on NUMA systems without manual tuning.
Velocity 2017 Performance analysis superpowers with Linux eBPFBrendan Gregg
Talk by for Velocity 2017 by Brendan Gregg: Performance analysis superpowers with Linux eBPF.
"Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will investigate this new technology, which sooner or later will be available to everyone who uses Linux. The talk will dive deep on these new tracing, observability, and debugging capabilities. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
Talk for QConSF 2015: "Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of system performance tools, touring common problems with system metrics, monitoring, statistics, visualizations, measurement overhead, and benchmarks. This will likely involve some unlearning, as you discover tools you have been using for years, are in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many popular talks on operating system performance tools. This is an anti-version of these talks, to focus on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! This talk will include advice and methodologies for verifying new performance tools, understanding how they work, and using them successfully."
Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of Linux performance tools, touring common problems with system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. You might discover that tools you have been using for years, are in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many talks on tools that work, including giving the Linux PerformanceTools talk originally at SCALE. This is an anti-version of that talk, to focus on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! This talk will include advice for verifying new performance tools, understanding how they work, and using them successfully.
Shak larry-jeder-perf-and-tuning-summit14-part1-finalTommy Lee
This document provides an overview and agenda for a performance analysis and tuning presentation focusing on Red Hat Enterprise Linux evolution, NUMA scheduling improvements, and use of cgroups/containers for resource management. Key points include how RHEL has incorporated features like tuned profiles, transparent hugepages, automatic NUMA balancing, and how cgroups can guarantee quality of service and enable dynamic resource allocation for multi-application environments. Performance results are shown for databases and SPEC benchmarks utilizing these features.
Talk by Brendan Gregg for USENIX LISA 2019: Linux Systems Performance. Abstract: "
Systems performance is an effective discipline for performance analysis and tuning, and can help you find performance wins for your applications and the kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas of Linux systems performance: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (Ftrace, bcc/BPF, and bpftrace/BPF), and much advice about what is and isn't important to learn. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
Kernel Recipes 2017 - Understanding the Linux kernel via ftrace - Steven RostedtAnne Nicolas
Ftrace is the official tracer of the Linux kernel. It has been apart of Linux since 2.6.31, and has grown tremendously ever since. Ftrace’s name comes from its most powerful feature: function tracing. But the ftrace infrastructure is much more than that. It also encompasses the trace events that are used by perf, as well as kprobes that can dynamically add trace events that the user defines.
This talk will focus on learning how the kernel works by using the ftrace infrastructure. It will show how to see what happens within the kernel during a system call; learn how interrupts work; see how ones processes are being scheduled, and more. A quick introduction to some tools like trace-cmd and KernelShark will also be demonstrated.
Steven Rostedt, VMware
The document summarizes a talk on container performance analysis. It discusses identifying bottlenecks at the host, container, and kernel level using various Linux performance tools. It then provides an overview of how containers work in Linux using namespaces and control groups (cgroups). Finally, it demonstrates some example commands like docker stats, systemd-cgtop, and bcc/BPF tools that can be used to analyze containers and cgroups from the host system.
Talk for Facebook Systems@Scale 2021 by Brendan Gregg: "BPF (eBPF) tracing is the superpower that can analyze everything, helping you find performance wins, troubleshoot software, and more. But with many different front-ends and languages, and years of evolution, finding the right starting point can be hard. This talk will make it easy, showing how to install and run selected BPF tools in the bcc and bpftrace open source projects for some quick wins. Think like a sysadmin, not like a programmer."
OSSNA 2017 Performance Analysis Superpowers with Linux BPFBrendan Gregg
Talk by Brendan Gregg for OSSNA 2017. "Advanced performance observability and debugging have arrived built into the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF, or eBPF) and the repurposing of its sandboxed virtual machine to provide programmatic capabilities to system tracing. Netflix has been investigating its use for new observability tools, monitoring, security uses, and more. This talk will be a dive deep on these new tracing, observability, and debugging capabilities, which sooner or later will be available to everyone who uses Linux. Whether you’re doing analysis over an ssh session, or via a monitoring GUI, BPF can be used to provide an efficient, custom, and deep level of detail into system and application performance.
This talk will also demonstrate the new open source tools that have been developed, which make use of kernel- and user-level dynamic tracing (kprobes and uprobes), and kernel- and user-level static tracing (tracepoints). These tools provide new insights for file system and storage performance, CPU scheduler performance, TCP performance, and a whole lot more. This is a major turning point for Linux systems engineering, as custom advanced performance instrumentation can be used safely in production environments, powering a new generation of tools and visualizations."
Shak larry-jeder-perf-and-tuning-summit14-part2-final - Tommy Lee
This document provides an overview of performance analysis and tuning techniques in Red Hat Enterprise Linux (RHEL). It discusses the tuned profile packages and how they optimize systems for different workloads. Specific topics covered include disk I/O tuning, memory tuning, network performance tuning, and power management techniques. A variety of Linux performance analysis tools are also introduced, including tuned, turbostat, netsniff-ng, and Performance Co-Pilot.
1. DPDK achieves high throughput packet processing on commodity hardware by reducing kernel overhead through techniques like polling, huge pages, and userspace drivers.
2. In Linux, packet processing involves expensive operations like system calls, interrupts, and data copying between kernel and userspace. DPDK avoids these by doing all packet processing in userspace.
3. DPDK uses techniques like isolating cores for packet I/O threads, lockless ring buffers, and NUMA awareness to further optimize performance. It can achieve throughput of over 14 million packets per second on 10GbE interfaces.
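As a sanity check on that 14 million packets per second figure, the theoretical line rate of 10 GbE at the minimum 64-byte frame size can be computed directly; each frame also carries 20 bytes of on-wire overhead (preamble, start-of-frame delimiter, inter-frame gap):

```python
# Theoretical packet rate for 10 GbE at the minimum Ethernet frame size.
LINK_BPS = 10e9             # 10 Gbit/s
FRAME_BYTES = 64            # minimum Ethernet frame
WIRE_OVERHEAD = 7 + 1 + 12  # preamble + SFD + inter-frame gap

pps = LINK_BPS / ((FRAME_BYTES + WIRE_OVERHEAD) * 8)
print(f"{pps / 1e6:.2f} Mpps")  # 14.88 Mpps
```

Hitting that rate leaves roughly 67 ns of budget per packet, which is why avoiding per-packet system calls, interrupts, and copies matters so much.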
The document provides a history of updates to the 7-Zip file compression software from version 4.18 beta in 2005 to version 4.65 in 2009. Major updates include support for additional archive formats, encryption methods, bug fixes, speed optimizations, and localization additions. Each version listing includes its release date and brief descriptions of the changes and improvements made in that version.
Your Linux AMI: Optimization and Performance (CPN302) | AWS re:Invent 2013 - Amazon Web Services
Your AMI is one of the core foundations for running applications and services effectively on Amazon EC2. In this session, you'll learn how to optimize your AMI, including how you can measure and diagnose system performance and tune parameters for improved CPU and network performance. We'll cover application-specific examples from Netflix on how optimized AMIs can lead to improved performance.
The document discusses PROSE (Partitioned Reliable Operating System Environment), an approach that runs applications in specialized kernel partitions for finer control over system resources and improved reliability. It aims to simplify development of specialized kernels and enable resource sharing across partitions. The approach is evaluated using IBM's research hypervisor rHype, which shows PROSE can reduce noise and provide more deterministic performance than Linux. Future work focuses on running larger commercial workloads and further performance/noise experiments.
[Open Infrastructure & Cloud Native Days Korea 2019]
This session shares real-world cases of building customer-facing services with community versions of OpenStack and Ceph: an enterprise cloud service built for flexibility, and an exchange service requiring a high level of security, built and operated in production. It also covers the technology stack used in these projects, failure troubleshooting cases, and optimization approaches. When it comes to OpenStack, it's Open Source Consulting.
#openstack #ceph #openinfraday #cloudnative #opensourceconsulting
This document provides an overview of kernel tuning and customizing for performance on Enterprise Linux. It discusses monitoring tools, basic tuning steps like disabling unused services, memory tuning including hugepages and transparent huge pages, swap/cache tuning. It also covers I/O and filesystem tuning and networking tuning. The goal is to provide concepts and approaches for tuning the major components to optimize performance.
This document provides a performance engineer's predictions for computing performance trends in 2021 and beyond. The engineer discusses trends in processors, memory, disks, networking, runtimes, kernels, hypervisors, and observability. For processors, predictions include multi-socket systems becoming less common, the future of simultaneous multithreading being unclear, practical core count limits being reached in the 2030s, and more processor vendors including ARM-based and RISC-V options. Memory predictions focus on many workloads being memory-bound currently.
The document discusses using Automatic Workload Repository (AWR) to analyze IO subsystem performance. It provides examples of AWR report sections including foreground and background wait events, operating system statistics, and wait histograms. The document recommends using this data to identify IO bottlenecks and guide tuning efforts such as optimizing indexes to reduce full table scans.
The document provides an overview of using Automatic Workload Repository (AWR) for memory analysis in an Oracle database. It discusses various memory structures like the database buffer cache, shared pool, and process memory. It outlines signs of memory issues and describes analyzing the top waits, load profile, instance efficiency, SQL areas, and other AWR report sections to identify and address performance problems related to memory configuration and usage.
The document discusses monitoring and tuning Oracle databases on z/OS and z/Linux systems. It provides an overview of using Statspack to diagnose performance issues from high CPU usage, I/O utilization, or memory usage based on timed events, SQL statements, and tablespace I/O statistics. Potential causes and remedies are described for each area that could lead to bad response times.
AWR Ambiguity: Performance reasoning when the numbers don't add up - John Beresniewicz
A close look at an AWR report where DB Time is exceeded by the sum of DB CPU and foreground wait time. We recall core Oracle performance principles and instrumentation design on the way to untangling the confusion.
The document is an AWR report that provides key statistics and configuration details about an Oracle database called AULTDB over a 60 minute period. It includes information like the number of sessions, database startup time, cache sizes, and wait events. The report is intended to help analyze wait times and identify potential performance bottlenecks in the database.
Statspack provides concise performance summaries of Oracle databases. It was introduced in Oracle 8.1.6 to highlight top wait events and has expanded over time to include additional metrics. The tool takes snapshots of key performance views and calculates deltas and ratios to analyze where time is being spent and identify potential areas for optimization.
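The snapshot-and-delta arithmetic that Statspack (and later AWR) performs is simple to sketch. The counter names and values below are hypothetical, chosen so the resulting rates resemble a typical load profile:

```python
# Two snapshots of cumulative V$SYSSTAT-style counters (hypothetical values).
begin = {"user calls": 1_000_000, "parse count (total)": 50_000}
end   = {"user calls": 1_110_160, "parse count (total)": 77_720}
elapsed_s = 3600  # one hour between snapshots

deltas = {name: end[name] - begin[name] for name in begin}
per_second = {name: deltas[name] / elapsed_s for name in begin}
print(per_second["user calls"])           # 30.6
print(per_second["parse count (total)"])  # 7.7
```

Ratios such as Soft Parse % are then computed from pairs of these deltas, which is where time is being spent becomes visible without re-querying the instance.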
Troubleshooting Complex Oracle Performance Problems - Tanel Poder
The document describes troubleshooting a performance issue involving parallel data loads into a data warehouse. It is determined that the slowness is due to recursive locking and buffer busy waits occurring during inserts into the SEG$ table as new segments are created by parallel CREATE TABLE AS SELECT statements. This is causing a nested locking ping-pong effect between the cache, transaction, and I/O layers as sessions repeatedly acquire and release locks and buffers.
Technologies for working with disk storage and file systems in Windows Serve... - Vitaliy Starodubtsev
##What is Storage Replica
##Architecture and scenarios
##Synchronous and asynchronous replication
##Disk-to-disk, server-to-server, intra-cluster, and cluster-to-cluster replication
##Design and planning for Storage Replica
##What's new in Windows Server 2016 TP5
##The management GUI and other capabilities: demo and development plans
##Storage Replica integration with Storage Spaces Direct
This document contains a workload repository report for a database named DB11G. Key details include:
- The database ran on a Linux server with 1 CPU and 1.96GB of memory.
- Between two snapshots taken an hour apart, the average wait time per session was 4.8-5.1 seconds.
- The top foreground wait event was log file sync, taking up 9.15% of database time.
This slide will show you how to use SOFA to do performance analysis of CPU/GPU cooperative programs, especially programs running with deep software stacks like TensorFlow, PyTorch, etc.
source code at:
https://github.com/cyliustack/sofa
This document provides recommendations for system capacity planning for an Oracle database:
- Plan for 1 CPU per 200 concurrent users, and prefer more medium-speed CPUs over fewer, faster CPUs.
- Reserve 10% of memory for the operating system and allocate 220 MB for the Oracle SGA and 3 MB per user process.
- Use striped and mirrored or striped with parity RAID for disks. Consider raw devices or SANs if possible.
- Ensure the network capacity is adequate based on site size.
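The rules of thumb above can be turned into a quick sizing calculation. The user count below is hypothetical, purely to illustrate the arithmetic:

```python
# Capacity sketch following the rules of thumb above (hypothetical 400 users).
users = 400

cpus_needed = -(-users // 200)        # 1 CPU per 200 concurrent users (ceiling)
sga_mb = 220                          # Oracle SGA allowance
per_user_mb = 3                       # memory per user process
os_reserve = 0.10                     # 10% of memory kept for the OS

app_mb = sga_mb + per_user_mb * users       # 1420 MB for Oracle
total_mb = app_mb / (1 - os_reserve)        # gross memory including OS reserve
print(cpus_needed, app_mb, round(total_mb)) # 2 1420 1578
```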
On X86 systems, using an Unbreakable Enterprise Kernel (UEK) is recommended over other enterprise distributions as it provides better hardware support, security patches, and testing from the larger Linux community. Key configuration recommendations include enabling maximum CPU performance in BIOS, using memory types validated by Oracle, ensuring proper NUMA and CPU frequency settings, and installing only Oracle-validated packages to avoid issues. Monitoring tools like top, iostat, sar and ksar help identify any CPU, memory, disk or I/O bottlenecks.
Talk for YOW! by Brendan Gregg. "Systems performance studies the performance of computing systems, including all physical components and the full software stack to help you find performance wins for your application and kernel. However, most of us are not performance or kernel engineers, and have limited time to study this topic. This talk summarizes the topic for everyone, touring six important areas: observability tools, methodologies, benchmarking, profiling, tracing, and tuning. Included are recipes for Linux performance analysis and tuning (using vmstat, mpstat, iostat, etc), overviews of complex areas including profiling (perf_events) and tracing (ftrace, bcc/BPF, and bpftrace/BPF), advice about what is and isn't important to learn, and case studies to see how it is applied. This talk is aimed at everyone: developers, operations, sysadmins, etc, and in any environment running Linux, bare metal or the cloud."
Kernel Recipes 2016 - Understanding a Real-Time System (more than just a kernel) - Anne Nicolas
The PREEMPT_RT patch turns Linux into a hard Real-Time designed operating system. But it takes more than just a kernel to make sure you can meet all your requirements. This talk explains all aspects of the system that is being used for a mission critical project that must be considered. Creating a Real-Time environment is difficult and there is no simple solution to make sure that your system is capable to fulfill its needs. One must be vigilant with all aspects of the system to make sure there are no surprises. This talk will discuss most of the “gotchas” that come with putting together a Real-Time system.
You don’t need to be a developer to enjoy this talk. If you are curious to know how your computer is an unpredictable mess you should definitely come to this talk.
Steven Rostedt - Red Hat
Journey to Stability: Petabyte Ceph Cluster in OpenStack Cloud - Ceph Community
Cisco Cloud Services provides an OpenStack platform to Cisco SaaS applications using a petabyte-scale Ceph cluster. The initial Ceph cluster design led to stability problems as usage grew past 50% capacity. Improvements such as client IO throttling, NVMe journaling, upgrading Ceph versions, and moving the MON levelDB to SSD stabilized the cluster and reduced recovery times from hardware failures. Lessons learned included the need for devops practices, knowledge sharing, performance modeling, and avoiding technical debt from shortcuts.
Journey to Stability: Petabyte Ceph Cluster in OpenStack Cloud - Patrick McGarry
Cisco Cloud Services provides an OpenStack platform to Cisco SaaS applications using a worldwide deployment of Ceph clusters storing petabytes of data. The initial Ceph cluster design experienced major stability problems as the cluster grew past 50% capacity. Strategies were implemented to improve stability including client IO throttling, backfill and recovery throttling, upgrading Ceph versions, adding NVMe journals, moving the MON levelDB to SSDs, rebalancing the cluster, and proactively detecting slow disks. Lessons learned included the importance of devops practices, sharing knowledge, rigorous testing, and balancing performance, cost and time.
Performance tweaks and tools for Linux (Joe Damato) - Ontico
The document discusses various Linux performance analysis tools including lsof to list open files, strace to trace system calls, tcpdump to dump network traffic, perftools from Google for profiling CPU usage, and a Ruby library called perftools.rb for profiling Ruby code. Examples are provided for using these tools to analyze memory usage, slow queries, Ruby interpreter signals, thread scheduling overhead, and identifying hot spots in Ruby web applications.
Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program - inside-BigData.com
In this video from the DDN User Group at SC16, Sven Oehme Chief Research Strategist, IBM, presents "Big Lab Problems Solved with Spectrum Scale: Innovations for the Coral Program."
Watch the video presentation: http://wp.me/p3RLHQ-g52
Sign up for our insideHPC Newsletter: http://wp.me/p3RLHQ-g52
- The document discusses current R&D work on pre-Exascale HPC systems, including a PRACE 2011 prototype that delivers over 10 TFLOPS in a single rack using heterogeneous hardware with GPUs and achieves over 1.1 TFLOPS/kW efficiency.
- Performance debugging techniques are discussed for multi-socket, multi-chipset, multi-GPU systems to analyze issues like bottlenecks in the cache hierarchy topology and imbalanced I/O. Affinity and memory binding are important to optimize performance.
- Linux and Windows tools like HWLOC can be used to set CPU and GPU affinity as well as memory binding to improve data transfer rates between devices by ensuring local memory access.
HAProxy is a free, open-source load balancer and proxy server. It is fast, reliable, and widely used. Some common uses of HAProxy include load balancing HTTP traffic, using access control lists to route requests, handling HTTPS traffic, load balancing MySQL databases, and proxying SSH connections. The latest version of HAProxy introduced new features like connection tracking, limiting connections per IP address, and peer synchronization between HAProxy instances. HAProxy provides high performance, flexibility, and scalability for traffic routing and distribution.
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Startup Time Release RAC
------------ ----------- ------------ -------- --------------- ----------- ---
ERPSIT 851203393 ERPSIT1 1 17-Jan-13 12:24 11.2.0.3.0 YES
Host Name Platform CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
mwlsvtsitrac001 Linux x86 64-bit 4 1 1 7.80
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 3004 30-Jan-13 11:30:20 174 22.0
End Snap: 3005 30-Jan-13 12:30:26 174 19.5
Elapsed: 60.10 (mins)
DB Time: 19.01 (mins)
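A quick derived metric from the summary above: average active sessions, the DB Time accrued per second of wall-clock time (values copied from the report header):

```python
elapsed_min = 60.10  # wall-clock time between snapshots
db_time_min = 19.01  # total database time accrued in the interval

avg_active_sessions = db_time_min / elapsed_min
print(f"{avg_active_sessions:.2f}")  # 0.32
```

Roughly 0.3 sessions active on average against 4 CPUs, matching the "DB Time(s): 0.3" per-second figure in the load profile below, so this instance was lightly loaded overall.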
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,520M 1,520M Std Block Size: 8K
Shared Pool Size: 2,432M 2,432M Log Buffer: 11,848K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 0.3 0.2 0.00 0.01
DB CPU(s): 0.2 0.1 0.00 0.01
Redo size: 222,766.5 108,751.8
Logical reads: 8,748.4 4,270.9
Block changes: 1,271.8 620.9
Physical reads: 59.7 29.2
Physical writes: 31.5 15.4
User calls: 30.6 15.0
Parses: 7.7 3.8
Hard parses: 0.3 0.1
W/A MB processed: 0.9 0.4
Logons: 0.2 0.1
Executes: 829.5 404.9
Rollbacks: 0.9 0.4
Transactions: 2.1
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.87 In-memory Sort %: 100.00
Library Hit %: 99.88 Soft Parse %: 96.16
Execute to Parse %: 99.07 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 65.31 % Non-Parse CPU: 98.54
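The efficiency ratios above can be reproduced, to rounding, from the load profile. A sketch for Soft Parse %, using the rounded per-second rates reported earlier (the report itself uses the unrounded counters):

```python
parses_per_s = 7.7       # "Parses" from the load profile
hard_parses_per_s = 0.3  # "Hard parses" from the load profile

soft_parse_pct = (parses_per_s - hard_parses_per_s) / parses_per_s * 100
print(f"{soft_parse_pct:.1f}")  # 96.1 (report shows 96.16 from exact counters)
```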
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 76.15 76.17
% SQL with executions>1: 91.67 90.75
% Memory for SQL w/exec>1: 89.22 89.27
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU 723 63.4
db file sequential read 25,738 192 7 16.9 User I/O
direct path read 5,098 119 23 10.4 User I/O
gc cr block busy 3,282 19 6 1.6 Cluster
control file sequential read 7,310 15 2 1.3 System I/O
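The Avg wait and % DB time columns in this table are straightforward ratios; a cross-check for db file sequential read, with the total DB time of 1,140.5 s taken from the Time Model section:

```python
db_time_s = 1140.5                 # total DB time for the interval
waits, wait_time_s = 25_738, 192   # db file sequential read

avg_wait_ms = wait_time_s / waits * 1000
pct_db_time = wait_time_s / db_time_s * 100
print(round(avg_wait_ms), f"{pct_db_time:.1f}")  # 7 16.8 (report: 7 ms, 16.9%)
```

The small difference in the percentage comes from the wait time being rounded to 192 s in the table.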
Host CPU (CPUs: 4 Cores: 1 Sockets: 1)
~~~~~~~~ Load Average
Begin End %User %System %WIO %Idle
--------- --------- --------- --------- --------- ---------
1.20 1.20 7.9 1.7 2.3 90.2
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 6.7
% of busy CPU for Instance: 68.9
%DB time waiting for CPU - Resource Mgr: 0.0
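These instance CPU percentages tie out against the Time Model and Operating System Statistics sections later in the report (BUSY_TIME and IDLE_TIME are reported in centiseconds):

```python
busy_s = 140_257 / 100    # OS BUSY_TIME, centiseconds -> seconds
idle_s = 1_297_990 / 100  # OS IDLE_TIME
db_cpu_s = 722.6          # Time Model: DB CPU
bg_cpu_s = 244.4          # Time Model: background cpu time

instance_cpu_s = db_cpu_s + bg_cpu_s
pct_of_busy = instance_cpu_s / busy_s * 100
pct_of_total = instance_cpu_s / (busy_s + idle_s) * 100
print(f"{pct_of_busy:.1f} {pct_of_total:.1f}")  # 68.9 6.7
```

So this instance accounts for about two thirds of the CPU the host actually burned, but the host itself was 90% idle.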
Memory Statistics
~~~~~~~~~~~~~~~~~ Begin End
Host Mem (MB): 7,982.9 7,982.9
SGA use (MB): 4,096.0 4,096.0
PGA use (MB): 1,306.7 1,261.4
% Host Mem used for SGA+PGA: 67.68 67.11
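The memory percentage above is just SGA plus PGA over host memory, using the begin-snapshot values:

```python
host_mb = 7982.9  # Host Mem (MB)
sga_mb = 4096.0   # SGA use (MB)
pga_mb = 1306.7   # PGA use (MB), begin snapshot

pct = (sga_mb + pga_mb) / host_mb * 100
print(f"{pct:.2f}")  # 67.68
```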
RAC Statistics DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
Begin End
----- -----
Number of Instances: 2 2
Global Cache Load Profile
~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Global Cache blocks received: 5.84 2.85
Global Cache blocks served: 7.02 3.43
GCS/GES messages received: 27.02 13.19
GCS/GES messages sent: 39.11 19.09
DBWR Fusion writes: 0.42 0.20
Estd Interconnect traffic (KB) 115.79
Global Cache Efficiency Percentages (Target local+remote 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 99.80
Buffer access - remote cache %: 0.07
Buffer access - disk %: 0.13
Global Cache and Enqueue Services - Workload Characteristics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.0
Avg global cache cr block receive time (ms): 3.7
Avg global cache current block receive time (ms): 1.1
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 3.2
Avg global cache cr block flush time (ms): 5.5
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.4
Avg global cache current block flush time (ms): 5.6
Global Cache and Enqueue Services - Messaging Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 1.1
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 40.98
% of indirect sent messages: 58.69
% of flow controlled messages: 0.33
-------------------------------------------------------------
Cluster Interconnect
-> if IP/Public/Source at End snap is different a '*' is displayed
~~~~~~~~~~~~~~~~~~~~
Begin End
-------------------------------------------------- -----------
Interface IP Address Pub Source IP Pub Src
---------- --------------- --- ------------------------------ --- --- ---
eth1:1 169.254.175.104 N
Time Model Statistics DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> Total time in database user-calls (DB Time): 1140.5s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 1,080.4 94.7
DB CPU 722.6 63.4
RMAN cpu time (backup/restore) 169.3 14.8
PL/SQL execution elapsed time 106.2 9.3
inbound PL/SQL rpc elapsed time 89.4 7.8
parse time elapsed 71.1 6.2
hard parse elapsed time 66.0 5.8
PL/SQL compilation elapsed time 4.2 .4
connection management call elapsed time 2.8 .2
hard parse (sharing criteria) elapsed time 2.3 .2
hard parse (bind mismatch) elapsed time 0.5 .0
failed parse elapsed time 0.4 .0
Java execution elapsed time 0.3 .0
sequence load elapsed time 0.1 .0
repeated bind elapsed time 0.1 .0
DB time 1,140.5
background elapsed time 466.8
background cpu time 244.4
-------------------------------------------------------------
Operating System Statistics DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> *TIME statistic values are diffed.
All others display actual values. End Value is displayed if different
-> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
Statistic Value End Value
------------------------- ---------------------- ----------------
BUSY_TIME 140,257
IDLE_TIME 1,297,990
IOWAIT_TIME 32,794
NICE_TIME 0
SYS_TIME 24,159
USER_TIME 113,040
LOAD 1
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 73,728
VM_OUT_BYTES 1,024
PHYSICAL_MEMORY_BYTES 8,370,647,040
NUM_CPUS 4
NUM_CPU_CORES 1
NUM_CPU_SOCKETS 1
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 1,048,576
TCP_RECEIVE_SIZE_DEFAULT 87,380
TCP_RECEIVE_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 4,194,304
TCP_SEND_SIZE_MIN 4,096
-------------------------------------------------------------
Operating System Statistics - Detail DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
Snap Time Load %busy %user %sys %idle %iowait
--------------- -------- -------- -------- -------- -------- --------
30-Jan 11:30:20 1.2 N/A N/A N/A N/A N/A
30-Jan 12:30:26 1.2 9.8 7.9 1.7 90.2 2.3
-------------------------------------------------------------
Foreground Wait Class DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> s - second, ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Captured Time accounts for 101.4% of Total DB time 1,140.48 (s)
-> Total FG Wait Time: 433.35 (s) DB CPU time: 722.57 (s)
Avg
%Time Total Wait wait
Wait Class Waits -outs Time (s) (ms) %DB time
-------------------- ---------------- ----- ---------------- -------- ---------
DB CPU 723 63.4
User I/O 85,415 0 322 4 28.2
Cluster 36,962 0 55 1 4.8
Other 7,223 48 19 3 1.7
System I/O 7,540 0 17 2 1.4
Commit 3,448 0 13 4 1.2
Concurrency 4,990 0 5 1 0.5
Network 111,078 0 1 0 0.1
Application 1,355 5 1 1 0.1
Configuration 29 10 0 5 0.0
Administrative 1 0 0 132 0.0
-------------------------------------------------------------
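The 101.4% "Captured Time" figure in the header of this section is worth pausing on: foreground wait time and DB CPU are accounted independently, so a session that accrues CPU while also counted as waiting (for example, sitting on the run queue during an instrumented wait) is double-counted, and their sum can exceed DB time. The header arithmetic, using the values quoted above:

```python
total_fg_wait_s = 433.35  # Total FG Wait Time
db_cpu_s = 722.57         # DB CPU time
db_time_s = 1140.48       # Total DB time

captured_pct = (total_fg_wait_s + db_cpu_s) / db_time_s * 100
print(f"{captured_pct:.1f}")  # 101.4
```

This is the same accounting subtlety the "AWR Ambiguity" talk listed earlier examines in depth.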
Foreground Wait Events DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> s - second, ms - millisecond - 1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
-------------------------- ------------ ----- ---------- ------- -------- ------
db file sequential read 25,738 0 192 7 3.5 16.9
direct path read 5,098 0 119 23 0.7 10.4
gc cr block busy 3,282 0 19 6 0.4 1.6
control file sequential re 7,310 0 15 2 1.0 1.3
log file sync 3,448 0 13 4 0.5 1.2
gc current grant busy 7,175 0 10 1 1.0 .9
enq: WL - contention 1 0 10 9618 0.0 .8
gc current block 2-way 10,073 0 9 1 1.4 .8
db file scattered read 776 0 6 8 0.1 .6
gc cr grant 2-way 6,892 0 6 1 0.9 .5
gc current grant 2-way 5,729 0 5 1 0.8 .4
PX Deq: Slave Session Stat 357 0 3 10 0.0 .3
row cache lock 3,433 0 3 1 0.5 .3
gc cr multi block request 1,080 0 3 3 0.1 .2
utl_file I/O 51,529 0 2 0 7.0 .2
gc cr block 2-way 2,037 0 2 1 0.3 .2
recovery area: computing o 411 0 2 5 0.1 .2
IPC send completion sync 806 0 1 2 0.1 .1
library cache pin 957 0 1 1 0.1 .1
control file single write 146 0 1 7 0.0 .1
SQL*Net more data to clien 12,852 0 1 0 1.7 .1
library cache lock 570 0 1 1 0.1 .1
db file parallel read 68 0 1 9 0.0 .1
direct path write temp 185 0 1 3 0.0 .0
SQL*Net break/reset to cli 1,223 0 1 0 0.2 .0
gc current multi block req 373 0 0 1 0.1 .0
reliable message 557 0 0 1 0.1 .0
enq: PS - contention 251 4 0 2 0.0 .0
Disk file operations I/O 1,437 0 0 0 0.2 .0
direct path read temp 549 0 0 1 0.1 .0
name-service call wait 4 0 0 80 0.0 .0
ADR block file read 35 0 0 7 0.0 .0
DFS lock handle 125 9 0 2 0.0 .0
control file parallel writ 79 0 0 3 0.0 .0
enq: FB - contention 234 0 0 1 0.0 .0
enq: KO - fast object chec 30 0 0 6 0.0 .0
SQL*Net message to client 97,374 0 0 0 13.2 .0
gc buffer busy acquire 69 0 0 2 0.0 .0
recovery area: computing b 336 0 0 0 0.0 .0
enq: CF - contention 3 33 0 50 0.0 .0
enq: UL - contention 86 74 0 2 0.0 .0
switch logfile command 1 0 0 132 0.0 .0
os thread startup 1 0 0 127 0.0 .0
SQL*Net more data from cli 852 0 0 0 0.1 .0
gc cr disk read 49 0 0 2 0.0 .0
gc current block congested 77 0 0 1 0.0 .0
gc cr grant congested 61 0 0 1 0.0 .0
CSS initialization 16 0 0 5 0.0 .0
direct path write 35 0 0 2 0.0 .0
log file switch completion 1 0 0 61 0.0 .0
log buffer space 1 0 0 56 0.0 .0
PX Deq: Signal ACK RSG 110 0 0 0 0.0 .0
gc current block busy 12 0 0 4 0.0 .0
CSS operation: action 17 0 0 3 0.0 .0
Wait Event Histogram Detail (64 msec to 2 sec)DB/Inst: ERPSIT/ERPSIT1 Snaps:
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> Units for % of Total Waits:
ms is milliseconds
s is 1024 milliseconds (approximately 1 second)
-> % of Total Waits: total waits for all wait classes, including Idle
-> % of Total Waits: value of .0 indicates value was <.05%;
value of null is truly 0
-> Ordered by Event (only non-idle events are displayed)
% of Total Waits
-----------------------------------------------
Waits
64ms
Event to 2s <32ms <64ms <1/8s <1/4s <1/2s <1s <2s >=2s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
ADR block file read 2 96.3 3.7
ASM file metadata operatio 46 97.7 .7 .7 .7 .0 .1 .0
DFS lock handle 1 99.7 .3
IPC send completion sync 1 99.9 .1
JS coord start wait 1 100.0
Log archive I/O 13 93.9 6.1
PX Deq: Slave Session Stat 4 98.9 .3 .3 .3 .3
RMAN backup & recovery I/O 59 67.0 17.3 10.1 5.6
SQL*Net break/reset to cli 1 99.9 .1
control file parallel writ 54 96.9 1.6 .9 .5
control file sequential re 341 98.2 1.0 .6 .2
control file single write 3 98.5 1.5
db file parallel write 14 99.6 .3 .1 .0
db file scattered read 38 95.1 3.9 .9 .1
db file sequential read 1086 95.9 2.4 1.4 .3 .0
direct path read 1108 78.3 12.6 7.1 2.0 .1
direct path write temp 2 98.9 1.1
enq: CF - contention 8 83.7 6.1 6.1 2.0 2.0
enq: CO - master slave det 1 99.9 .1
enq: KO - fast object chec 1 98.8 1.2
enq: TX - index contention 1 75.0 25.0
enq: WF - contention 1 92.3 7.7
gc cr block 2-way 3 99.9 .1 .0
gc cr block busy 53 98.4 1.0 .4 .2 .0
gc cr multi block request 3 99.7 .1 .1 .1
kfk: async disk IO 5 87.2 12.8
log buffer space 1 100.0
log file parallel write 100 98.3 1.2 .3 .2 .0
log file sequential read 111 86.6 8.9 3.4 .7 .4
log file switch completion 1 100.0
log file sync 34 99.0 .6 .2 .2
name-service call wait 5 16.7 16.7 66.7
os thread startup 39 97.4 2.6
recovery area: computing b 2 99.4 .6
recovery area: computing o 2 99.5 .5
reliable message 4 99.6 .1 .3
switch logfile command 1 100.0
utl_file I/O 7 100.0 .0 .0 .0
-------------------------------------------------------------
Wait Event Histogram Detail (4 sec to 2 min)DB/Inst: ERPSIT/ERPSIT1 Snaps: 3
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> Units for % of Total Waits:
s is 1024 milliseconds (approximately 1 second)
m is 64*1024 milliseconds (approximately 67 seconds or 1.1 minutes)
-> % of Total Waits: total waits for all wait classes, including Idle
-> % of Total Waits: value of .0 indicates value was <.05%;
value of null is truly 0
-> Ordered by Event (only non-idle events are displayed)
% of Total Waits
-----------------------------------------------
Waits
4s
Event to 2m <2s <4s <8s <16s <32s < 1m < 2m >=2m
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
Streams AQ: qmn coordinato 9 18.2 81.8
enq: WL - contention 1 50.0 50.0
-------------------------------------------------------------
Wait Event Histogram Detail (4 min to 1 hr)DB/Inst: ERPSIT/ERPSIT1 Snaps: 30
No data exists for this section of the report.
-------------------------------------------------------------
Service Statistics DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads (K) Reads (K)
---------------------------- ------------ ------------ ------------ ------------
BAEBATCH 786 574 50 25,068
SYS$USERS 162 17 139 496
ERPSIT 122 78 6 1,139
BAEOLTP 71 53 20 4,751
SYS$BACKGROUND 0 0 1 97
-------------------------------------------------------------
Service Wait Class Stats DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in seconds
Service Name
----------------------------------------------------------------
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
--------- --------- --------- --------- --------- --------- --------- ---------
BAEBATCH
73032 164 2355 2 0 0 15477 0
SYS$USERS
4378 117 109 0 1 0 2669 0
ERPSIT
5085 28 1620 2 0 0 79950 1
BAEOLTP
3046 12 907 1 0 0 13289 0
SYS$BACKGROUND
1135 6 15113 4 2 1 0 0
-------------------------------------------------------------
SQL ordered by Elapsed Time DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
-> %Total - Elapsed Time as a percentage of Total DB time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 59.7% of Total DB Time (s): 1,140
-> Captured PL/SQL account for 60.0% of Total DB Time (s): 1,140
Elapsed Elapsed Time
Time (s) Executions per Exec (s) %Total %CPU %IO SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
498.7 1 498.74 43.7 79.9 17.2 6p9gw6kyg3rz3
Module: INCOIN
BEGIN INVPOPIF.inopinp_open_interface_process(:errbuf,:rc,:A0,:A1,:A2,:A3,:A4,:A
5,:A6,:A7); END;
132.1 1 132.12 11.6 94.5 3.6 a19g8cbxkd799
Module: WPMAPRPUSH
BEGIN HR_WPM_MASS_APR_PUSH.APPRAISAL_CP(:errbuf,:rc,:A0,:A1,:A2,:A3); END;
93.9 9,794 0.01 8.2 99.0 .0 3wr7w1zc19nkd
Module: INCOIN
select item_seq_num, item_seq_num, description, 'Y', to_char(to_date(null),'YYYY
/MM/DD HH24:MI:SS'), to_char(to_date(null),'YYYY/MM/DD HH24:MI:SS'), 'N', NULL f
rom BAE_INV_ITEM_SEQ_V where item_seq_num = :FND_BIND1
74.0 1 73.99 6.5 2.6 96.7 6zz4b4gvb94ta
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select f.tablespace_name, decode(sign(z.mbytes - f.bytes),-1 ,f.bytes,z.mbyte
s ) bytes, m.next_extent from sys.dba_tablespaces t, (select tablespace
_name, max(next_extent) next_extent from dba_segments group by tablespac
e_name) m, (select tablespace_name, max(bytes) bytes from dba_free_sp
69.7 9,816 0.01 6.1 98.8 .0 cx2gy7kr23dbu
Module: e:SQLAP:frm:APXINWKB
SELECT r.error_message FROM fnd_flex_value_rules_vl r, fnd_flex_value_rule_usag
es u, fnd_flex_value_rule_lines l WHERE r.flex_value_set_id = :b_flex_value_set_
id AND u.application_id = :b_resp_application_id AND u.responsibility_id = :
b_responsibility_id AND u.flex_value_rule_id = r.flex_value_rule_id AND l.fl
52.2 10,000 0.01 4.6 94.1 4.7 d573r988g8kj4
Module: INCOIN
UPDATE MTL_SYSTEM_ITEMS_INTERFACE I SET ( I.LAST_UPDATED_BY, I.CREATED_BY, I.SUM
MARY_FLAG, I.ENABLED_FLAG, I.START_DATE_ACTIVE, I.END_DATE_ACTIVE, I.DESCRIPTION
, I.LONG_DESCRIPTION, I.BUYER_ID, I.ACCOUNTING_RULE_ID, I.INVOICING_RULE_ID, I.S
EGMENT1, I.SEGMENT2, I.SEGMENT3, I.SEGMENT4, I.SEGMENT5, I.SEGMENT6, I.SEGMENT7,
44.8 1 44.76 3.9 2.9 97.5 48v5dbr9up9dd
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select substr(segment_name,1,30), substr(segment_type,1,10), substr(tables
pace_name,1,15), max_extents - extents from sys.dba_segments where (max_exten
ts < 3300 and (max_extents - extents) < 25) and tablespace_name not like '%TEM
P%' and tablespace_name not like 'UNDO%' and segment_type != 'ROLLBACK' an
40.7 9,794 0.00 3.6 28.6 67.4 2wza6u9v69rfb
Module: INCOIN
INSERT INTO MTL_ITEM_CATEGORIES ( INVENTORY_ITEM_ID, CATEGORY_SET_ID, CATEGORY_I
D, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOG
IN, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, REQUEST_ID, ORGANIZ
ATION_ID ) SELECT :B2 , S.CATEGORY_SET_ID, S.CATEGORY_ID, :B5 , :B9 , :B5 , :B9
26.2 10,000 0.00 2.3 99.0 .0 1a95uwn37qyjf
Module: INCOIN
UPDATE /*+ index(MTL_ITEM_REVISIONS_INTERFACE, MTL_ITEM_REVS_INTERFACE_N3) */ MT
L_ITEM_REVISIONS_INTERFACE SET EFFECTIVITY_DATE = SYSDATE WHERE SET_PROCESS_ID =
:B2 AND PROCESS_FLAG = 1 AND REVISION = :B1 AND (EFFECTIVITY_DATE IS NULL OR EF
FECTIVITY_DATE > SYSDATE)
26.2 28 0.94 2.3 16.2 80.6 dq1d01n879a1z
Module: PAXACMPT
INSERT INTO PA_PROJECTS_FOR_ACCUM (PROJECT_ID, REQUEST_ID, ACTION_FLAG, SEGMENT1
, EXCEPTION_FLAG) SELECT :B1 , :B3 , 'CM', :B2 , 'N' FROM DUAL WHERE PA_CHECK_CO
MMITMENTS.COMMITMENTS_CHANGED(:B1 ) = 'Y'
23.4 106 0.22 2.1 92.2 4.2 g567zkk66t5v5
Module: e:INV:frm:FNDRSRUN
SELECT /*+ */ CONCURRENT_PROGRAM_ID,PROGRAM_APPLICATION_ID,PRINTER,PROGRAM_SHO
RT_NAME,ARGUMENT_TEXT,PRINT_STYLE,USER_PRINT_STYLE,SAVE_OUTPUT_FLAG,ROW_ID,ACTUA
L_COMPLETION_DATE,COMPLETION_TEXT,PARENT_REQUEST_ID,REQUEST_TYPE,FCP_PRINTER,FCP
_PRINT_STYLE,FCP_REQUIRED_STYLE,LAST_UPDATE_DATE,LAST_UPDATED_BY,REQUESTED_BY,HA
20.3 1 20.35 1.8 14.6 83.4 2m0m4p19y6w35
Module: INCOIN
INSERT INTO MTL_SYSTEM_ITEMS_TL ( INVENTORY_ITEM_ID, ORGANIZATION_ID, LANGUAGE,
SOURCE_LANG, DESCRIPTION, LONG_DESCRIPTION, LAST_UPDATE_DATE, LAST_UPDATED_BY, C
REATION_DATE, CREATED_BY, LAST_UPDATE_LOGIN ) SELECT I.INVENTORY_ITEM_ID, I.ORGA
NIZATION_ID, L.LANGUAGE_CODE, DECODE(L.LANGUAGE_CODE, USERENV('LANG'), USERENV('
16.6 352 0.05 1.5 2.2 96.0 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
16.3 9,794 0.00 1.4 7.3 91.1 ac6g64j5ajjpq
Module: INCOIN
INSERT INTO EGO_ITEM_TEXT_TL ( ID_TYPE , ITEM_ID , ITEM_CODE , ORG_ID , LANGUAGE
, SOURCE_LANG , ITEM_CATALOG_GROUP_ID , INVENTORY_ITEM_ID , TEXT , CREATION_DAT
E , CREATED_BY , LAST_UPDATE_DATE , LAST_UPDATED_BY , LAST_UPDATE_LOGIN ) SELECT
:B10 , MSIK.INVENTORY_ITEM_ID , MSIK.CONCATENATED_SEGMENTS , MSIK.ORGANIZATION_
15.4 1 15.42 1.4 23.8 13.4 35b006uxwfmjx
Module: BAEINV_ITEM_IMP_R12
BEGIN BAEINV_IMP_ITEMS_R12_PKG.item_imp_r12_proc(:errbuf,:rc,:A0); END;
15.4 1 15.36 1.3 23.7 13.4 cgxyr0k1v5apg
Module: BAEINV_ITEM_IMP_R12
UPDATE BAECUST.BAEINV_ITEM_CONV_R12_STG STG SET STG.ITEM_STATUS = 'PROCESSED',LA
ST_UPDATE_DATE=SYSDATE WHERE STG.ITEM_STATUS = 'INTERFACED' AND NOT EXISTS (SELE
CT 0 FROM MTL_SYSTEM_ITEMS_INTERFACE MSI2 WHERE MSI2.TRANSACTION_ID = STG.SEQNUM
AND MSI2.PROCESS_FLAG = 7)
12.6 1 12.61 1.1 36.1 60.7 5b5hjgyghdyy2
Module: PAXACMPT
INSERT INTO PA_COMMITMENT_TXNS_TMP (PROJECT_ID, TASK_ID, TRANSACTION_SOURCE, LIN
E_TYPE, CMT_NUMBER, CMT_DISTRIBUTION_ID, CMT_HEADER_ID, DESCRIPTION, EXPENDITURE
_ITEM_DATE, PA_PERIOD, GL_PERIOD, CMT_LINE_NUMBER, CMT_CREATION_DATE, CMT_APPROV
ED_DATE, CMT_REQUESTOR_NAME, CMT_BUYER_NAME, CMT_APPROVED_FLAG, CMT_PROMISED_DAT
12.2 4,757 0.00 1.1 99.1 .1 bp16nssuvdxv7
Module: WPMAPRPUSH
SELECT APPRAISAL_ID, APPRAISAL_SYSTEM_STATUS FROM PER_APPRAISALS WHERE PLAN_ID =
:B5 AND APPRAISAL_PERIOD_START_DATE = :B4 AND APPRAISAL_PERIOD_END_DATE = :B3 A
ND APPRAISEE_PERSON_ID = :B2 AND APPRAISAL_SYSTEM_STATUS <> 'TRANSFER_OUT' AND A
PPRAISAL_TEMPLATE_ID = :B1
11.8 855,120 0.00 1.0 102.4 .0 3ghp86vmmra2x
Module: e:PER:frm:PERWSADR
SELECT EFFECTIVE_DATE FROM FND_SESSIONS WHERE SESSION_ID=USERENV('sessionid')
-------------------------------------------------------------
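The percentage columns in the section above follow the arithmetic spelled out in its notes: %Total is elapsed time divided by total DB time times 100, while %CPU and %IO are CPU time and user I/O time as shares of elapsed time. A minimal Python sketch (the `pct` helper is illustrative, not part of any Oracle tooling) reproduces the top row's figures:

```python
def pct(part, whole):
    """Percentage of `whole` accounted for by `part`, rounded as in the report."""
    return round(100.0 * part / whole, 1)

total_db_time = 1140.0   # Total DB Time (s) from the section notes

# Top statement 6p9gw6kyg3rz3 (INVPOPIF.inopinp_open_interface_process)
elapsed, cpu, user_io = 498.7, 398.3, 85.7

print(pct(elapsed, total_db_time))  # %Total -> 43.7
print(pct(cpu, elapsed))            # %CPU   -> 79.9
print(pct(user_io, elapsed))        # %IO    -> 17.2
```

The same check applies to any row; small discrepancies come from the report's own rounding of the time columns.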
SQL ordered by CPU Time DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - CPU Time as a percentage of Total DB CPU
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 53.4% of Total CPU Time (s): 723
-> Captured PL/SQL account for 75.2% of Total CPU Time (s): 723
CPU CPU per Elapsed
Time (s) Executions Exec (s) %Total Time (s) %CPU %IO SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
398.3 1 398.33 55.1 498.7 79.9 17.2 6p9gw6kyg3rz3
Module: INCOIN
BEGIN INVPOPIF.inopinp_open_interface_process(:errbuf,:rc,:A0,:A1,:A2,:A3,:A4,:A
5,:A6,:A7); END;
124.9 1 124.87 17.3 132.1 94.5 3.6 a19g8cbxkd799
Module: WPMAPRPUSH
BEGIN HR_WPM_MASS_APR_PUSH.APPRAISAL_CP(:errbuf,:rc,:A0,:A1,:A2,:A3); END;
92.9 9,794 0.01 12.9 93.9 99.0 .0 3wr7w1zc19nkd
Module: INCOIN
select item_seq_num, item_seq_num, description, 'Y', to_char(to_date(null),'YYYY
/MM/DD HH24:MI:SS'), to_char(to_date(null),'YYYY/MM/DD HH24:MI:SS'), 'N', NULL f
rom BAE_INV_ITEM_SEQ_V where item_seq_num = :FND_BIND1
68.9 9,816 0.01 9.5 69.7 98.8 .0 cx2gy7kr23dbu
Module: e:SQLAP:frm:APXINWKB
SELECT r.error_message FROM fnd_flex_value_rules_vl r, fnd_flex_value_rule_usag
es u, fnd_flex_value_rule_lines l WHERE r.flex_value_set_id = :b_flex_value_set_
id AND u.application_id = :b_resp_application_id AND u.responsibility_id = :
b_responsibility_id AND u.flex_value_rule_id = r.flex_value_rule_id AND l.fl
49.1 10,000 0.00 6.8 52.2 94.1 4.7 d573r988g8kj4
Module: INCOIN
UPDATE MTL_SYSTEM_ITEMS_INTERFACE I SET ( I.LAST_UPDATED_BY, I.CREATED_BY, I.SUM
MARY_FLAG, I.ENABLED_FLAG, I.START_DATE_ACTIVE, I.END_DATE_ACTIVE, I.DESCRIPTION
, I.LONG_DESCRIPTION, I.BUYER_ID, I.ACCOUNTING_RULE_ID, I.INVOICING_RULE_ID, I.S
EGMENT1, I.SEGMENT2, I.SEGMENT3, I.SEGMENT4, I.SEGMENT5, I.SEGMENT6, I.SEGMENT7,
26.0 10,000 0.00 3.6 26.2 99.0 .0 1a95uwn37qyjf
Module: INCOIN
UPDATE /*+ index(MTL_ITEM_REVISIONS_INTERFACE, MTL_ITEM_REVS_INTERFACE_N3) */ MT
L_ITEM_REVISIONS_INTERFACE SET EFFECTIVITY_DATE = SYSDATE WHERE SET_PROCESS_ID =
:B2 AND PROCESS_FLAG = 1 AND REVISION = :B1 AND (EFFECTIVITY_DATE IS NULL OR EF
FECTIVITY_DATE > SYSDATE)
21.6 106 0.20 3.0 23.4 92.2 4.2 g567zkk66t5v5
Module: e:INV:frm:FNDRSRUN
SELECT /*+ */ CONCURRENT_PROGRAM_ID,PROGRAM_APPLICATION_ID,PRINTER,PROGRAM_SHO
RT_NAME,ARGUMENT_TEXT,PRINT_STYLE,USER_PRINT_STYLE,SAVE_OUTPUT_FLAG,ROW_ID,ACTUA
L_COMPLETION_DATE,COMPLETION_TEXT,PARENT_REQUEST_ID,REQUEST_TYPE,FCP_PRINTER,FCP
_PRINT_STYLE,FCP_REQUIRED_STYLE,LAST_UPDATE_DATE,LAST_UPDATED_BY,REQUESTED_BY,HA
12.1 855,120 0.00 1.7 11.8 102.4 .0 3ghp86vmmra2x
Module: e:PER:frm:PERWSADR
SELECT EFFECTIVE_DATE FROM FND_SESSIONS WHERE SESSION_ID=USERENV('sessionid')
12.1 4,757 0.00 1.7 12.2 99.1 .1 bp16nssuvdxv7
Module: WPMAPRPUSH
SELECT APPRAISAL_ID, APPRAISAL_SYSTEM_STATUS FROM PER_APPRAISALS WHERE PLAN_ID =
:B5 AND APPRAISAL_PERIOD_START_DATE = :B4 AND APPRAISAL_PERIOD_END_DATE = :B3 A
ND APPRAISEE_PERSON_ID = :B2 AND APPRAISAL_SYSTEM_STATUS <> 'TRANSFER_OUT' AND A
10.7 1 10.65 1.5 10.7 99.4 .1 74dx9nqnt12ur
Module: TOAD background query session
SELECT --DECODE(papf.current_employee_flag, 'Y', papf.employee_number, --
pv.segment1) employee_supplier_number, --
DECODE(papf.current_employee_flag, 'Y', papf.full_name, --
pv.vendor_name) employee_supplier_name, TO_CHAR(
-------------------------------------------------------------
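Note the 102.4 %CPU reported for 3ghp86vmmra2x above: its summed CPU time (12.1 s) slightly exceeds its elapsed time (11.8 s). With 855,120 sub-millisecond executions, per-call timer granularity commonly produces this artifact, so %CPU values just over 100 are usually noise rather than a data error. A small sketch (hypothetical row tuples, values copied from the report) that flags such rows when scanning report data:

```python
# Each row: (sql_id, cpu_s, elapsed_s, executions) - values taken from the report
rows = [
    ("6p9gw6kyg3rz3", 398.3, 498.7, 1),
    ("3wr7w1zc19nkd", 92.9, 93.9, 9794),
    ("3ghp86vmmra2x", 12.1, 11.8, 855120),
]

def cpu_anomalies(rows):
    """Return sql_ids whose summed CPU time exceeds elapsed time,
    which usually indicates timer granularity on very frequent executions."""
    return [sql_id for sql_id, cpu, elapsed, _ in rows if cpu > elapsed]

print(cpu_anomalies(rows))  # ['3ghp86vmmra2x']
```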
SQL ordered by User I/O Wait Time DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - User I/O Time as a percentage of Total User I/O Wait time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 82.0% of Total User I/O Wait Time (s):
-> Captured PL/SQL account for 33.0% of Total User I/O Wait Time (s):
User I/O UIO per Elapsed
Time (s) Executions Exec (s) %Total Time (s) %CPU %IO SQL Id
---------- ------------ ---------- ------ ---------- ------ ------ -------------
85.7 1 85.71 26.2 498.7 79.9 17.2 6p9gw6kyg3rz3
Module: INCOIN
BEGIN INVPOPIF.inopinp_open_interface_process(:errbuf,:rc,:A0,:A1,:A2,:A3,:A4,:A
5,:A6,:A7); END;
71.5 1 71.52 21.8 74.0 2.6 96.7 6zz4b4gvb94ta
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select f.tablespace_name, decode(sign(z.mbytes - f.bytes),-1 ,f.bytes,z.mbyte
s ) bytes, m.next_extent from sys.dba_tablespaces t, (select tablespace
_name, max(next_extent) next_extent from dba_segments group by tablespac
e_name) m, (select tablespace_name, max(bytes) bytes from dba_free_sp
43.6 1 43.62 13.3 44.8 2.9 97.5 48v5dbr9up9dd
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select substr(segment_name,1,30), substr(segment_type,1,10), substr(tables
pace_name,1,15), max_extents - extents from sys.dba_segments where (max_exten
ts < 3300 and (max_extents - extents) < 25) and tablespace_name not like '%TEM
P%' and tablespace_name not like 'UNDO%' and segment_type != 'ROLLBACK' an
27.4 9,794 0.00 8.4 40.7 28.6 67.4 2wza6u9v69rfb
Module: INCOIN
INSERT INTO MTL_ITEM_CATEGORIES ( INVENTORY_ITEM_ID, CATEGORY_SET_ID, CATEGORY_I
D, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOG
IN, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, REQUEST_ID, ORGANIZ
ATION_ID ) SELECT :B2 , S.CATEGORY_SET_ID, S.CATEGORY_ID, :B5 , :B9 , :B5 , :B9
21.1 28 0.75 6.4 26.2 16.2 80.6 dq1d01n879a1z
Module: PAXACMPT
INSERT INTO PA_PROJECTS_FOR_ACCUM (PROJECT_ID, REQUEST_ID, ACTION_FLAG, SEGMENT1
, EXCEPTION_FLAG) SELECT :B1 , :B3 , 'CM', :B2 , 'N' FROM DUAL WHERE PA_CHECK_CO
MMITMENTS.COMMITMENTS_CHANGED(:B1 ) = 'Y'
17.0 1 16.97 5.2 20.3 14.6 83.4 2m0m4p19y6w35
Module: INCOIN
INSERT INTO MTL_SYSTEM_ITEMS_TL ( INVENTORY_ITEM_ID, ORGANIZATION_ID, LANGUAGE,
SOURCE_LANG, DESCRIPTION, LONG_DESCRIPTION, LAST_UPDATE_DATE, LAST_UPDATED_BY, C
REATION_DATE, CREATED_BY, LAST_UPDATE_LOGIN ) SELECT I.INVENTORY_ITEM_ID, I.ORGA
NIZATION_ID, L.LANGUAGE_CODE, DECODE(L.LANGUAGE_CODE, USERENV('LANG'), USERENV('
16.0 352 0.05 4.9 16.6 2.2 96.0 cvn54b7yz0s8u
select /*+ index(idl_ub1$ i_idl_ub11) +*/ piece#,length,piece from idl_ub1$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
14.9 9,794 0.00 4.5 16.3 7.3 91.1 ac6g64j5ajjpq
Module: INCOIN
INSERT INTO EGO_ITEM_TEXT_TL ( ID_TYPE , ITEM_ID , ITEM_CODE , ORG_ID , LANGUAGE
, SOURCE_LANG , ITEM_CATALOG_GROUP_ID , INVENTORY_ITEM_ID , TEXT , CREATION_DAT
E , CREATED_BY , LAST_UPDATE_DATE , LAST_UPDATED_BY , LAST_UPDATE_LOGIN ) SELECT
:B10 , MSIK.INVENTORY_ITEM_ID , MSIK.CONCATENATED_SEGMENTS , MSIK.ORGANIZATION_
8.4 361 0.02 2.6 9.0 3.2 93.5 3ktacv9r56b51
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,type#,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj
#=obj#(+) order by order#
7.7 1 7.65 2.3 12.6 36.1 60.7 5b5hjgyghdyy2
Module: PAXACMPT
INSERT INTO PA_COMMITMENT_TXNS_TMP (PROJECT_ID, TASK_ID, TRANSACTION_SOURCE, LIN
E_TYPE, CMT_NUMBER, CMT_DISTRIBUTION_ID, CMT_HEADER_ID, DESCRIPTION, EXPENDITURE
_ITEM_DATE, PA_PERIOD, GL_PERIOD, CMT_LINE_NUMBER, CMT_CREATION_DATE, CMT_APPROV
ED_DATE, CMT_REQUESTOR_NAME, CMT_BUYER_NAME, CMT_APPROVED_FLAG, CMT_PROMISED_DAT
7.0 352 0.02 2.1 7.4 2.7 94.9 ga9j9xk5cy9s0
select /*+ index(idl_sb4$ i_idl_sb41) +*/ piece#,length,piece from idl_sb4$ wher
e obj#=:1 and part=:2 and version=:3 order by piece#
6.1 1 6.10 1.9 8.6 18.7 70.8 asasm6bd9bb3q
Module: e:PO:bes:oracle.apps.fnd.user.name.validate
BEGIN POS_SUPPLIER_USER_REG_PKG.approve(:1, :2, :3, :4); END;
5.9 1 5.94 1.8 8.3 14.2 71.3 amvph0gymr9dn
Module: INCOIN
INSERT INTO MTL_PENDING_ITEM_STATUS ( INVENTORY_ITEM_ID, ORGANIZATION_ID, STATUS
_CODE, EFFECTIVE_DATE, PENDING_FLAG, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION
_DATE, CREATED_BY, IMPLEMENTED_DATE, LIFECYCLE_ID, PHASE_ID ) SELECT I.INVENTORY
_ITEM_ID, I.ORGANIZATION_ID, I.INVENTORY_ITEM_STATUS_CODE, :B4 , :B6 , :B4 , :B5
5.6 1 5.57 1.7 10.4 36.8 53.6 c3c3a0h85quaf
Module: INCOIN
BEGIN ENI_ITEMS_STAR_PKG.Sync_
Star_Items_From_IOI (
p_api_version => :p_api_version , p_init_msg_list =>
:p_init_msg_list , p_set_process_id => :p_set_process_id , x_
5.4 1 5.44 1.7 10.3 37.2 53.1 cjws1uu98746f
Module: INCOIN
MERGE INTO eni_oltp_item_star STAR USING (SELECT item.inventory_item_id inve
ntory_item_id, item.organization_id organization_id,
item.CONCATENATED_SEGMENTS|| ' (' || mtp.organization_code || ')'value,
decode(item.organization_id,mtp.master_organization_id,null,
5.3 1 5.35 1.6 6.6 16.6 81.2 9z8hcsgn2yzcr
Module: PAXACMPT
INSERT INTO PA_PROJECTS_FOR_ACCUM (PROJECT_ID, REQUEST_ID, ACTION_FLAG, SEGMENT1
, EXCEPTION_FLAG) SELECT PROJ.PROJECT_ID, :B5 REQUEST_ID, 'CS' ACTION_FLAG, PROJ
.SEGMENT1, 'N' FROM PA_PROJECTS_FOR_ACCUM_V PROJ WHERE PROJ.SEGMENT1 BETWEEN :B4
AND :B3 AND PROJ.PROJECT_TYPE = NVL(:B2 , PROJECT_TYPE) AND :B1 = 'Y' AND EXIST
4.8 1 4.76 1.5 132.1 94.5 3.6 a19g8cbxkd799
Module: WPMAPRPUSH
BEGIN HR_WPM_MASS_APR_PUSH.APPRAISAL_CP(:errbuf,:rc,:A0,:A1,:A2,:A3); END;
3.8 1 3.85 1.2 8.2 20.7 47.2 3wb45aaxhqd4v
Module: e:INV:bes:oracle.apps.ego.item.postRevisionChang
BEGIN ICX_CAT_POPULATE_MI_GRP.
populateBulkItemChange( P_API_VERSION => 1.0
,P_COMMIT => :l_commit ,P_INIT_MSG_LIST
=> NULL ,P_VALIDATION_LEVEL => NULL
3.8 1 3.85 1.2 8.1 20.7 47.2 9s5bunf40djkk
Module: e:INV:bes:oracle.apps.ego.item.postRevisionChang
SELECT /*+ LEADING(doc) */ DOC.*, NVL(IC1.RT_CATEGORY_ID, -2) IP_CATEGORY_ID, IC
AG, ENABLED_FLAG, DESCRIPTION, BUYER_ID, ACCOUNTING_RULE_ID, INVOICING_RULE_ID,
SEGMENT1, SEGMENT2, SEGMENT3, SEGMENT4, SEGMENT5, SEGMENT6, SEGMENT7, SEGMENT8,
-------------------------------------------------------------
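The %CPU/%IO split above separates CPU-bound from I/O-bound work: the two sqlplus dictionary queries (6zz4b4gvb94ta, 48v5dbr9up9dd) spend roughly 97% of their elapsed time on user I/O, so tuning effort there should target physical reads rather than CPU. A sketch (illustrative helper name, values from the report) that picks out such I/O-dominated statements:

```python
# (sql_id, pct_cpu, pct_io) for some of the top User I/O consumers above
rows = [
    ("6p9gw6kyg3rz3", 79.9, 17.2),
    ("6zz4b4gvb94ta", 2.6, 96.7),
    ("48v5dbr9up9dd", 2.9, 97.5),
    ("2wza6u9v69rfb", 28.6, 67.4),
]

def io_bound(rows, threshold=90.0):
    """Statements spending nearly all elapsed time waiting on user I/O."""
    return [sql_id for sql_id, _, pct_io in rows if pct_io >= threshold]

print(io_bound(rows))  # ['6zz4b4gvb94ta', '48v5dbr9up9dd']
```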
SQL ordered by Gets DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 31,548,931
-> Captured SQL account for 61.9% of Total
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
20,003,252 1 2.000325E+07 63.4 498.7 79.9 17.2 6p9gw6kyg3rz3
Module: INCOIN
BEGIN INVPOPIF.inopinp_open_interface_process(:errbuf,:rc,:A0,:A1,:A2,:A3,:A4,:A
5,:A6,:A7); END;
4,867,618 9,794 497.0 15.4 93.9 99 0 3wr7w1zc19nkd
Module: INCOIN
select item_seq_num, item_seq_num, description, 'Y', to_char(to_date(null),'YYYY
/MM/DD HH24:MI:SS'), to_char(to_date(null),'YYYY/MM/DD HH24:MI:SS'), 'N', NULL f
rom BAE_INV_ITEM_SEQ_V where item_seq_num = :FND_BIND1
4,476,065 106 42,227.0 14.2 23.4 92.2 4.2 g567zkk66t5v5
Module: e:INV:frm:FNDRSRUN
SELECT /*+ */ CONCURRENT_PROGRAM_ID,PROGRAM_APPLICATION_ID,PRINTER,PROGRAM_SHO
RT_NAME,ARGUMENT_TEXT,PRINT_STYLE,USER_PRINT_STYLE,SAVE_OUTPUT_FLAG,ROW_ID,ACTUA
L_COMPLETION_DATE,COMPLETION_TEXT,PARENT_REQUEST_ID,REQUEST_TYPE,FCP_PRINTER,FCP
_PRINT_STYLE,FCP_REQUIRED_STYLE,LAST_UPDATE_DATE,LAST_UPDATED_BY,REQUESTED_BY,HA
4,414,255 1 4,414,255.0 14.0 132.1 94.5 3.6 a19g8cbxkd799
Module: WPMAPRPUSH
BEGIN HR_WPM_MASS_APR_PUSH.APPRAISAL_CP(:errbuf,:rc,:A0,:A1,:A2,:A3); END;
2,002,497 9,816 204.0 6.3 69.7 98.8 0 cx2gy7kr23dbu
Module: e:SQLAP:frm:APXINWKB
SELECT r.error_message FROM fnd_flex_value_rules_vl r, fnd_flex_value_rule_usag
es u, fnd_flex_value_rule_lines l WHERE r.flex_value_set_id = :b_flex_value_set_
id AND u.application_id = :b_resp_application_id AND u.responsibility_id = :
b_responsibility_id AND u.flex_value_rule_id = r.flex_value_rule_id AND l.fl
1,710,240 855,120 2.0 5.4 11.8 102.4 0 3ghp86vmmra2x
Module: e:PER:frm:PERWSADR
SELECT EFFECTIVE_DATE FROM FND_SESSIONS WHERE SESSION_ID=USERENV('sessionid')
1,005,148 10,000 100.5 3.2 26.2 99 0 1a95uwn37qyjf
Module: INCOIN
UPDATE /*+ index(MTL_ITEM_REVISIONS_INTERFACE, MTL_ITEM_REVS_INTERFACE_N3) */ MT
L_ITEM_REVISIONS_INTERFACE SET EFFECTIVITY_DATE = SYSDATE WHERE SET_PROCESS_ID =
:B2 AND PROCESS_FLAG = 1 AND REVISION = :B1 AND (EFFECTIVITY_DATE IS NULL OR EF
FECTIVITY_DATE > SYSDATE)
923,821 9,794 94.3 2.9 40.7 28.6 67.4 2wza6u9v69rfb
Module: INCOIN
INSERT INTO MTL_ITEM_CATEGORIES ( INVENTORY_ITEM_ID, CATEGORY_SET_ID, CATEGORY_I
D, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOG
IN, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, REQUEST_ID, ORGANIZ
ATION_ID ) SELECT :B2 , S.CATEGORY_SET_ID, S.CATEGORY_ID, :B5 , :B9 , :B5 , :B9
719,445 4,757 151.2 2.3 12.2 99.1 .1 bp16nssuvdxv7
Module: WPMAPRPUSH
SELECT APPRAISAL_ID, APPRAISAL_SYSTEM_STATUS FROM PER_APPRAISALS WHERE PLAN_ID =
:B5 AND APPRAISAL_PERIOD_START_DATE = :B4 AND APPRAISAL_PERIOD_END_DATE = :B3 A
ND APPRAISEE_PERSON_ID = :B2 AND APPRAISAL_SYSTEM_STATUS <> 'TRANSFER_OUT' AND A
376,629 1 376,629.0 1.2 10.4 36.8 53.6 c3c3a0h85quaf
Module: INCOIN
BEGIN ENI_ITEMS_STAR_PKG.Sync_
Star_Items_From_IOI (
p_api_version => :p_api_version , p_init_msg_list =>
:p_init_msg_list , p_set_process_id => :p_set_process_id , x_
376,597 1 376,597.0 1.2 10.3 37.2 53.1 cjws1uu98746f
Module: INCOIN
MERGE INTO eni_oltp_item_star STAR USING (SELECT item.inventory_item_id inve
ntory_item_id, item.organization_id organization_id,
item.CONCATENATED_SEGMENTS|| ' (' || mtp.organization_code || ')'value,
decode(item.organization_id,mtp.master_organization_id,null,
-------------------------------------------------------------
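The "per Exec" column above is simply buffer gets divided by executions. For high-frequency statements it is the more useful tuning signal, since even a modest per-execution cost multiplies into millions of gets (e.g. 3ghp86vmmra2x at 2 gets per execution still accumulates 1,710,240 gets over 855,120 calls). A sketch (illustrative helper) reproducing the column from the report's figures:

```python
def gets_per_exec(gets, executions):
    """Buffer gets per execution, as shown in the 'per Exec' column."""
    return round(gets / executions, 1)

# Figures from the report above
print(gets_per_exec(4_867_618, 9_794))    # 3wr7w1zc19nkd -> 497.0
print(gets_per_exec(2_002_497, 9_816))    # cx2gy7kr23dbu -> 204.0
print(gets_per_exec(1_710_240, 855_120))  # 3ghp86vmmra2x -> 2.0
```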
SQL ordered by Reads DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> %Total - Physical Reads as a percentage of Total Disk Reads
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Disk Reads: 215,401
-> Captured SQL account for 85.9% of Total
Physical Reads Elapsed
Reads Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ---------- ------ ---------- ------ ------ -------------
69,967 1 69,967.0 32.5 74.0 2.6 96.7 6zz4b4gvb94ta
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select f.tablespace_name, decode(sign(z.mbytes - f.bytes),-1 ,f.bytes,z.mbyte
s ) bytes, m.next_extent from sys.dba_tablespaces t, (select tablespace
_name, max(next_extent) next_extent from dba_segments group by tablespac
e_name) m, (select tablespace_name, max(bytes) bytes from dba_free_sp
69,424 1 69,424.0 32.2 44.8 2.9 97.5 48v5dbr9up9dd
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select substr(segment_name,1,30), substr(segment_type,1,10), substr(tables
pace_name,1,15), max_extents - extents from sys.dba_segments where (max_exten
ts < 3300 and (max_extents - extents) < 25) and tablespace_name not like '%TEM
P%' and tablespace_name not like 'UNDO%' and segment_type != 'ROLLBACK' an
38,521 1 38,521.0 17.9 498.7 79.9 17.2 6p9gw6kyg3rz3
Module: INCOIN
BEGIN INVPOPIF.inopinp_open_interface_process(:errbuf,:rc,:A0,:A1,:A2,:A3,:A4,:A
5,:A6,:A7); END;
21,996 1 21,996.0 10.2 8.2 20.7 47.2 3wb45aaxhqd4v
Module: e:INV:bes:oracle.apps.ego.item.postRevisionChang
BEGIN ICX_CAT_POPULATE_MI_GRP.
populateBulkItemChange( P_API_VERSION => 1.0
,P_COMMIT => :l_commit ,P_INIT_MSG_LIST
=> NULL ,P_VALIDATION_LEVEL => NULL
21,996 1 21,996.0 10.2 8.1 20.7 47.2 9s5bunf40djkk
Module: e:INV:bes:oracle.apps.ego.item.postRevisionChang
SELECT /*+ LEADING(doc) */ DOC.*, NVL(IC1.RT_CATEGORY_ID, -2) IP_CATEGORY_ID, IC
1.CATEGORY_NAME IP_CATEGORY_NAME, CTX.INVENTORY_ITEM_ID CTX_INVENTORY_ITEM_ID, C
TX.SOURCE_TYPE CTX_SOURCE_TYPE, CTX.ITEM_TYPE CTX_ITEM_TYPE, CTX.PURCHASING_ORG_
ID CTX_PURCHASING_ORG_ID, CTX.SUPPLIER_ID CTX_SUPPLIER_ID, CTX.SUPPLIER_SITE_ID
7,324 1 7,324.0 3.4 15.4 23.8 13.4 35b006uxwfmjx
Module: BAEINV_ITEM_IMP_R12
BEGIN BAEINV_IMP_ITEMS_R12_PKG.item_imp_r12_proc(:errbuf,:rc,:A0); END;
7,322 1 7,322.0 3.4 15.4 23.7 13.4 cgxyr0k1v5apg
Module: BAEINV_ITEM_IMP_R12
UPDATE BAECUST.BAEINV_ITEM_CONV_R12_STG STG SET STG.ITEM_STATUS = 'PROCESSED',LA
ST_UPDATE_DATE=SYSDATE WHERE STG.ITEM_STATUS = 'INTERFACED' AND NOT EXISTS (SELE
CT 0 FROM MTL_SYSTEM_ITEMS_INTERFACE MSI2 WHERE MSI2.TRANSACTION_ID = STG.SEQNUM
AND MSI2.PROCESS_FLAG = 7)
4,098 9,794 0.4 1.9 40.7 28.6 67.4 2wza6u9v69rfb
Module: INCOIN
INSERT INTO MTL_ITEM_CATEGORIES ( INVENTORY_ITEM_ID, CATEGORY_SET_ID, CATEGORY_I
D, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOG
IN, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, REQUEST_ID, ORGANIZ
ATION_ID ) SELECT :B2 , S.CATEGORY_SET_ID, S.CATEGORY_ID, :B5 , :B9 , :B5 , :B9
2,609 1 2,609.0 1.2 8.3 14.2 71.3 amvph0gymr9dn
Module: INCOIN
INSERT INTO MTL_PENDING_ITEM_STATUS ( INVENTORY_ITEM_ID, ORGANIZATION_ID, STATUS
BEGIN ENI_ITEMS_STAR_PKG.Sync_
Star_Items_From_IOI (
p_api_version => :p_api_version , p_init_msg_list =>
:p_init_msg_list , p_set_process_id => :p_set_process_id , x_
-------------------------------------------------------------
SQL ordered by Physical Reads (UnOptimized) DB/Inst: ERPSIT/ERPSIT1 Snaps: 3004-3005
-> UnOptimized Read Reqs = Physical Read Reqts - Optimized Read Reqs
-> %Opt - Optimized Reads as percentage of SQL Read Requests
-> %Total - UnOptimized Read Reqs as a percentage of Total UnOptimized Read Reqs
-> Total Physical Read Requests: 35,816
-> Captured SQL account for 61.9% of Total
-> Total UnOptimized Read Requests: 35,816
-> Captured SQL account for 61.9% of Total
-> Total Optimized Read Requests: 1
-> Captured SQL account for 0.0% of Total
UnOptimized Physical UnOptimized
Read Reqs Read Reqs Executions Reqs per Exe %Opt %Total SQL Id
----------- ----------- ---------- ------------ ------ ------ -------------
17,864 17,864 1 17,864.0 0.0 49.9 6p9gw6kyg3rz3
Module: INCOIN
BEGIN INVPOPIF.inopinp_open_interface_process(:errbuf,:rc,:A0,:A1,:A2,:A3,:A4,:A
5,:A6,:A7); END;
4,098 4,098 9,794 0.4 0.0 11.4 2wza6u9v69rfb
Module: INCOIN
INSERT INTO MTL_ITEM_CATEGORIES ( INVENTORY_ITEM_ID, CATEGORY_SET_ID, CATEGORY_I
D, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION_DATE, CREATED_BY, LAST_UPDATE_LOG
IN, PROGRAM_APPLICATION_ID, PROGRAM_ID, PROGRAM_UPDATE_DATE, REQUEST_ID, ORGANIZ
ATION_ID ) SELECT :B2 , S.CATEGORY_SET_ID, S.CATEGORY_ID, :B5 , :B9 , :B5 , :B9
3,064 3,064 1 3,064.0 0.0 8.6 6zz4b4gvb94ta
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select f.tablespace_name, decode(sign(z.mbytes - f.bytes),-1 ,f.bytes,z.mbyte
s ) bytes, m.next_extent from sys.dba_tablespaces t, (select tablespace
_name, max(next_extent) next_extent from dba_segments group by tablespac
e_name) m, (select tablespace_name, max(bytes) bytes from dba_free_sp
2,852 2,852 1 2,852.0 0.0 8.0 48v5dbr9up9dd
Module: sqlplus@mwlsvtsitrac001 (TNS V1-V3)
select substr(segment_name,1,30), substr(segment_type,1,10), substr(tables
pace_name,1,15), max_extents - extents from sys.dba_segments where (max_exten
ts < 3300 and (max_extents - extents) < 25) and tablespace_name not like '%TEM
P%' and tablespace_name not like 'UNDO%' and segment_type != 'ROLLBACK' an
2,609 2,609 1 2,609.0 0.0 7.3 amvph0gymr9dn
Module: INCOIN
INSERT INTO MTL_PENDING_ITEM_STATUS ( INVENTORY_ITEM_ID, ORGANIZATION_ID, STATUS
_CODE, EFFECTIVE_DATE, PENDING_FLAG, LAST_UPDATE_DATE, LAST_UPDATED_BY, CREATION
_DATE, CREATED_BY, IMPLEMENTED_DATE, LIFECYCLE_ID, PHASE_ID ) SELECT I.INVENTORY
_ITEM_ID, I.ORGANIZATION_ID, I.INVENTORY_ITEM_STATUS_CODE, :B4 , :B6 , :B4 , :B5
2,132 2,132 1 2,132.0 0.0 6.0 c3c3a0h85quaf
Module: INCOIN
BEGIN ENI_ITEMS_STAR_PKG.Sync_
Star_Items_From_IOI (
p_api_version => :p_api_version , p_init_msg_list =>
:p_init_msg_list , p_set_process_id => :p_set_process_id , x_
2,128 2,128 1 2,128.0 0.0 5.9 cjws1uu98746f
Module: INCOIN
MERGE INTO eni_oltp_item_star STAR USING (SELECT item.inventory_item_id inve
ntory_item_id, item.organization_id organization_id,
item.CONCATENATED_SEGMENTS|| ' (' || mtp.organization_code || ')'value,
decode(item.organization_id,mtp.master_organization_id,null,
1,529 1,529 1 1,529.0 0.0 4.3 3wb45aaxhqd4v
Module: e:INV:bes:oracle.apps.ego.item.postRevisionChang
BEGIN ICX_CAT_POPULATE_MI_GRP.