The document discusses addressing performance issues using a "drilldown approach". This approach involves first identifying if the database is overloaded, then identifying when it is overloaded, and finally identifying how database time is distributed in order to pinpoint bottlenecks. Various tools like AWR, Statspack, and code instrumentation are recommended to gather detailed performance data for analysis.
Understanding Average Active Sessions (AAS) is critical to understanding Oracle performance at the systemic level. This is my first presentation on the topic done at RMOUG Training Days in 2007. Later I will upload a more recent presentation on AAS from 2013.
Awr1page - Sanity checking time instrumentation in AWR reports (John Beresniewicz)
Discusses Oracle time-based performance instrumentation as presented in AWR reports and inconsistencies between instrumentation sources that can cause confusion as conflicting information is presented. The cognitive load of investigating and reasoning about such conundrums is very high, discouraging even senior performance experts. A program (AWR1page) is discussed that consumes an AWR report and produces a 1-page normalized time summary by instrumentation source, precisely designed for reasoning about instrumentation inconsistencies in AWR reports.
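The normalization idea behind such a one-page summary can be sketched in a few lines of Python: express each instrumentation source's reported DB time as a percentage of elapsed time, then flag pairs of sources that disagree widely. The source names, figures, and tolerance below are invented for illustration; this is a sketch of the concept, not AWR1page itself.

```python
# Sketch: normalize the DB time reported by different AWR instrumentation
# sources against elapsed time, so inconsistencies stand out at a glance.
# Source names and figures are illustrative, not real AWR fields.

def normalize_time_sources(elapsed_s, sources):
    """Return each source's reported time as a percentage of elapsed time."""
    return {name: round(100.0 * total / elapsed_s, 1)
            for name, total in sources.items()}

def flag_inconsistencies(normalized, tolerance_pct=10.0):
    """Return pairs of sources whose normalized totals disagree widely."""
    names = sorted(normalized)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if abs(normalized[a] - normalized[b]) > tolerance_pct]

# One hour of wall-clock time, three hypothetical instrumentation sources
# (each value is the seconds of DB time that source reports):
summary = normalize_time_sources(
    3600,
    {"time_model_db_time": 7200,
     "wait_events_plus_cpu": 6480,
     "ash_estimated_db_time": 7330})
# summary -> {'time_model_db_time': 200.0, 'wait_events_plus_cpu': 180.0,
#             'ash_estimated_db_time': 203.6}
```

Normalizing everything to a percentage of elapsed time is what makes disagreement between sources visible in a single glance, which is the point of a one-page summary.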
ASHviz - Data visualization research experiments using ASH data (John Beresniewicz)
RMOUG Training Days 2020 abstract:
The Active Session History (ASH) mechanism is a rich source of fine-grained data about database activity, and is the lynchpin for many database performance management features in the Diagnostic and Tuning packs. Many interesting stories about happenings in the database are buried in ASH waiting to be revealed, and data visualization is key to sifting these out from the high dimensionality and volume of ASH data. The session will cover a number of data visualization experiments conducted using a single ASH dump with an emphasis on the iterative process of discovering useful data visualizations.
This short presentation is about the deeper meaning of the core Oracle performance metric "Average Active Sessions" as the time derivative of the DB Time function, which explains why the Enterprise Manager DB Performance Page is literally a picture of DB Time (as the integral of AAS) as well as why "ASH Math" works to estimate DB Time (it's a Riemann sum as in first-year calculus.) Also, the relationship of AAS to Little's Law in queueing theory is briefly mentioned.
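The "ASH Math" mentioned here fits in a few lines of Python: ASH samples active sessions at a fixed interval, so DB Time is approximated by a Riemann sum (total sample-row count times the sample interval), and AAS is that sum divided by elapsed time. The sample data below is invented for illustration.

```python
# ASH math sketch: DB Time ~ (number of ASH sample rows) * (sample interval),
# a Riemann sum over the active-session count; AAS = DB Time / elapsed time.
# The sample data below is invented for illustration.

SAMPLE_INTERVAL_S = 1.0  # ASH samples active sessions once per second

# Each inner list: the session ids observed active at one sample time.
ash_samples = [[101, 102], [101], [101, 102, 103], [102], []]

def estimated_db_time(samples, interval=SAMPLE_INTERVAL_S):
    # One sample row per active session per tick -> sum of row counts.
    return sum(len(s) for s in samples) * interval

def average_active_sessions(samples, interval=SAMPLE_INTERVAL_S):
    elapsed = len(samples) * interval
    return estimated_db_time(samples, interval) / elapsed

# 2+1+3+1+0 = 7 seconds of DB Time over 5 seconds elapsed -> AAS = 1.4
```

This is exactly why the DB Performance Page is "a picture of DB Time": the area under the AAS curve over any interval is the DB Time accumulated in that interval.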
Oracle Performance Tuning Training | Oracle Performance Tuning (OracleTrainings)
Email: inbox.oracletrainings@gmail.com
Contact: +91 8121 020 111
Oracle Performance Tuning Training teaches you to use Oracle Database command-line performance tools to optimize database performance and tune SQL statements.
Oracle Performance Tuning Training Course Content
Overview Of Performance Tuning
• Basic Concepts in Performance Tuning
• Getting Started with Performance Tuning Features & Tools
Designing & Developing for Performance
• Oracle Methodology
• Understanding the Investment Options
• Understanding the Scalability
• System Architecture
• The Application Design Principles
• Workload Testing, Modeling, & Implementation
• Deploying the New Applications
The Performance Improvement Methods
• Oracle Performance Improvement Method
• Emergency Performance Methods
Configuring a Database for Performance
• Performance Considerations for the Initial Instance Configuration
• Creating & Maintaining Tables for Good Performance
• Performance Considerations for the Shared Servers
Automatic Performance Statistics
• Data Gathering
• Overview of the Automatic Workload Repository
The Automatic Performance Diagnostics
• Introduction to the Database Diagnostic Monitoring
• Automatic Database Diagnostic Monitor
Memory Configuration & Use
• Understanding the Memory Allocation Issues
• Configuring & Using the Buffer Cache
• Configuring & Using the Shared Pool and Large Pool
• Configuring & Using the Redo Log Buffer
• PGA Memory Management
I/O Configuration & Design
• Understanding the I/O
• The Basic I/O Configuration
Understanding The Operating System Resources
• Understanding The Operating System Performance Issues
• Solving the Operating System Problems
• Understanding Of CPU
This is the presentation I gave at JavaDay Kiev 2015 on the architecture of Apache Spark. It covers the memory model, the shuffle implementations, data frames and some other high-level topics, and can be used as an introduction to Apache Spark.
Understanding SQL Trace, TKPROF and Execution Plan for beginners (Carlos Sierra)
The three fundamental steps of SQL Tuning are: 1) Diagnostics Collection; 2) Root Cause Analysis (RCA); and 3) Remediation. This introductory session on SQL Tuning is for novice DBAs and Developers who need to investigate a piece of an application that is performing poorly.
In this session participants will learn how to produce a SQL Trace and then a summary TKPROF report. A sample TKPROF report is navigated with the audience, where both the trivial and the not-so-trivial are exposed and explained. Execution Plans are also navigated and explained, so participants can later untangle complex Execution Plans and start diagnosing SQL that performs badly.
This session encourages participants to ask all kinds of questions that could be potential roadblocks to a deeper understanding of how to address a SQL statement performing poorly.
Apache Spark presentation at HasGeek FifthElephant
https://fifthelephant.talkfunnel.com/2015/15-processing-large-data-with-apache-spark
Covering Big Data Overview, Spark Overview, Spark Internals and its supported libraries
700 Updatable Queries Per Second: Spark as a Real-Time Web Service (Evan Chan)
700 Updatable Queries Per Second: Spark as a Real-Time Web Service. Find out how to use Apache Spark with FiloDb for low-latency queries - something you never thought possible with Spark. Scale it down, not just scale it up!
Building a High-Performance Database with Scala, Akka, and Spark (Evan Chan)
Here is my talk at Scala by the Bay 2016, Building a High-Performance Database with Scala, Akka, and Spark. Covers integration of Akka and Spark, when to use actors and futures, back pressure, reactive monitoring with Kamon, and more.
Accelerating Data Processing in Spark SQL with Pandas UDFs (Databricks)
Spark SQL provides a convenient layer of abstraction for users to express their query’s intent while letting Spark handle the more difficult task of query optimization. Since Spark 2.3, the addition of pandas UDFs allows users to define arbitrary functions in Python that can be executed in batches, giving them the flexibility required to write queries that suit very niche cases.
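The batching idea can be illustrated without a Spark cluster: a series-to-series pandas UDF body is just a vectorized function from pandas Series to pandas Series, which Spark applies to Arrow record batches. A minimal sketch follows; the `pandas_udf` registration is shown only in a comment, and the temperature example is invented for illustration.

```python
import pandas as pd

# A series-to-series pandas UDF body is an ordinary vectorized function.
# Under Spark it would be registered roughly as:
#   from pyspark.sql.functions import pandas_udf
#   fahrenheit_udf = pandas_udf(fahrenheit, "double")
# and Spark would feed it Arrow record batches rather than one row at a time.

def fahrenheit(celsius: pd.Series) -> pd.Series:
    # One vectorized operation per batch, instead of a Python call per row
    # as with a plain (non-pandas) Python UDF.
    return celsius * 9.0 / 5.0 + 32.0

batch = pd.Series([0.0, 100.0, -40.0])
result = fahrenheit(batch)
```

Because the function receives a whole batch at once, the per-row Python interpreter overhead of a classic UDF is replaced by a handful of vectorized operations per batch.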
Survey of some free Tools to enhance your SQL Tuning and Performance Diagnost... (Carlos Sierra)
We know there are a few tools, besides those provided by the product, that can help you enhance your abilities to do SQL Tuning or to diagnose a database performing poorly. Some are free and some are not. Some are good and some not so much. Some are easy to use and understand and some are not. Some of them help with the entire database, while some focus on a particular SQL statement. Most of these tools fill gaps and enhance the product. Which ones do we recommend, and why?
This session is about getting to know some free tools that can help you improve your diagnostics collection and skills when it comes to both SQL Tuning and overall Oracle database Performance Diagnostics. The survey presented includes, for the entire database: snapper, TUNAs360 and eDB360; then for SQL Tuning: planx, sqlash, sqlmon, SQLTXPLAIN and SQLd360.
Structured streaming plays an important role in current data infrastructure. In response to tremendous streaming requirements, we have actively worked on developing structured streaming in Spark in the past few months. In this talk, Kristine Guo and Liang-Chi Hsieh will detail some of the issues that arose when applying structured streaming and what was done to address them. Specifically, they will cover:
How streaming applications that need to maintain large amounts of state require a scalable state store provider as an alternative to the in-memory solution built in with Spark.
Structured streaming is currently missing session window support and although a map/flatMapWithState API may be used to implement a custom window, this approach does not generalize well across applications and is hard to maintain.
Why we focused on structured streaming efforts like RocksDB state store and session windowing.
Finally, they will detail how these features can help to compute aggregates over dynamic batches with minimum size requirements and perform stream-stream joins, while supporting high RPS and throughput.
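Session windowing, which the talk notes was missing from structured streaming at the time, amounts to grouping a key's events so that a gap larger than a timeout closes the current window. A small framework-free Python sketch of that per-key logic follows; the function name and the 30-second gap are invented for illustration, and this is roughly the state a custom map/flatMapWithState implementation would have to maintain.

```python
# Gap-based session windowing sketch: events for one key are split into
# sessions whenever the gap between consecutive event times exceeds
# `gap_s`. A custom stateful-streaming implementation keeps this kind of
# (start, end, count) tuple as its per-key state.

def sessionize(event_times, gap_s=30):
    """Split event timestamps (seconds) into (start, end, count) sessions."""
    sessions = []
    for t in sorted(event_times):
        if sessions and t - sessions[-1][1] <= gap_s:
            start, _, n = sessions[-1]
            sessions[-1] = (start, t, n + 1)   # extend the current session
        else:
            sessions.append((t, t, 1))          # gap exceeded: open a new one
    return sessions

# Events at 0, 10 and 25 s form one session; 100 and 110 s form another.
```

The hard part in a streaming setting, which this batch sketch glosses over, is that the per-key state must survive across micro-batches and be expired by watermarks, which is why built-in session window support and a scalable state store matter.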
SQL Server Tuning to Improve Database Performance (Mark Ginnebaugh)
SQL Server tuning is a process to eliminate performance bottlenecks and improve application service. This presentation from Confio Software discusses SQL diagramming, wait type data, column selectivity, and other solutions that will help make tuning projects a success, including:
• SQL Tuning Methodology
• Response Time Tuning Practices
• How to use SQL Diagramming techniques to tune SQL statements
• How to read execution plans
Tarabica 2019 (Belgrade, Serbia) - SQL Server performance troubleshooting (Jovan Popovic)
Finding and fixing performance issues in SQL Server and Azure SQL Database requires understanding how the database engine works and what can affect performance. People sometimes make changes without finding the exact cause of the problem, which causes additional issues in the future. In this presentation, we will see some techniques you can apply to identify problems and solutions using Query Store technology, DMVs, SQL plan analysis, etc.
https://www.tarabica.org/Session/Details/78
Learn from the author of SQLTXPLAIN the fundamentals of SQL Tuning: 1) Diagnostics Collection; 2) Root Cause Analysis (RCA); and 3) Remediation.
SQL Tuning is a complex and intimidating area of knowledge, and it requires years of frequent practice to master. Nevertheless, there are some concepts and practices that are fundamental to success. From a basic understanding of the Cost-based Optimizer (CBO) and Execution Plans, to more advanced topics such as Plan Stability and the caveats of using SQL Profiles and SQL Plan Baselines, this session is full of advice and experience sharing. Learn what works and what doesn't when it comes to SQL Tuning.
Participants of this session will also learn about several free tools (besides SQLTXPLAIN) that can be used to diagnose a SQL statement performing poorly, and some others to improve Execution Plan Stability.
Whether you are a novice DBA, or an experienced DBA or Developer, there will be something new for you in this session. And if this is your first encounter with SQL Tuning, at least you will learn the basic concepts and steps to succeed in your endeavor.
Apache Spark for RDBMS Practitioners: How I Learned to Stop Worrying and Lov... (Databricks)
This talk is about sharing experience and lessons learned on setting up and running the Apache Spark service inside the database group at CERN. It covers the many aspects of this change with examples taken from use cases and projects at the CERN Hadoop, Spark, streaming and database services. The talk is aimed at developers, DBAs, service managers and members of the Spark community who are using and/or investigating “Big Data” solutions deployed alongside relational database processing systems. The talk highlights key aspects of Apache Spark that have fuelled its rapid adoption for CERN use cases and for the data processing community at large, including the fact that it provides easy to use APIs that unify, under one large umbrella, many different types of data processing workloads from ETL, to SQL reporting to ML.
Spark can also easily integrate a large variety of data sources, from file-based formats to relational databases and more. Notably, Spark can easily scale up data pipelines and workloads from laptops to large clusters of commodity hardware or on the cloud. The talk also addresses some key points about the adoption process and learning curve around Apache Spark and the related “Big Data” tools for a community of developers and DBAs at CERN with a background in relational database operations.
How do you design a database that can ingest more than four million ... (javier ramirez)
In this session I will walk through the technical decisions we made while developing QuestDB, an open-source, Postgres-compatible time-series database, and how we managed to write more than four million rows per second without blocking or slowing down queries.
I will talk about things like (zero) garbage collection, instruction vectorization using SIMD, rewriting instead of reusing to shave off microseconds, taking advantage of advances in processors, hard drives and operating systems (such as io_uring support), and the trade-off between user experience and performance when new features are proposed.
This session is for you if you want to learn tips and techniques that are used to optimize database development, with special emphasis on SQL Server 2005. If you write a lot of stored procedures and want to learn the tools of a DBA, this is the session for you. If you are new to the SQL Server development environment, you will learn how the various constructs compare to each other and how better performance can be produced every time, with a brief introduction to understanding Execution Plans.
Python and Oracle: allies for best of data management (Laurent Leturgez)
In this presentation, I describe Python and how it can interact with the Oracle database and Oracle Cloud Infrastructure in various projects: from data visualisation to data science.
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution... (informapgpstrackings)
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
How to Position Your Globus Data Portal for Success: Ten Good Practices (Globus)
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Multiply Your Crypto Portfolio with the Innovative Features of Advanced Crypt... (Hivelance Technology)
Cryptocurrency trading bots are computer programs designed to automate buying, selling, and managing cryptocurrency transactions. These bots utilize advanced algorithms and machine learning techniques to analyze market data, identify trading opportunities, and execute trades on behalf of their users. By automating the decision-making process, crypto trading bots can react to market changes faster than human traders
Hivelance, a leading provider of cryptocurrency trading bot development services, stands out as the premier choice for crypto traders and developers. Hivelance boasts a team of seasoned cryptocurrency experts and software engineers who deeply understand the crypto market and the latest trends in automated trading. Hivelance leverages the latest technologies and tools in the industry, including advanced AI and machine learning algorithms, to create highly efficient and adaptable crypto trading bots.
How Recreation Management Software Can Streamline Your Operations (wottaspaceseo)
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
Why React Native as a Strategic Advantage for Startup Innovation (ayushiqss)
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? Among other benefits, it offers excellent cross-platform capabilities. With React Native, developers can write code once and run it on both iOS and Android devices, saving time and resources, which leads to shorter development cycles and hence faster time-to-market for your app.
Let’s take the example of a startup that wanted to release its app on both iOS and Android at once. By using React Native, they managed to create the app and bring it to market within a very short period. This gave them an advantage over their competitors, because they reached a large user base that quickly generated revenue for them.
SOCRadar Research Team: Latest Activities of IntelBroker (SOCRadar)
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntelBroker. We have compiled for you what happened over the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam (takuyayamamoto1800)
In these slides, we show a simulation example and how to compile the solver.
The Helmholtz equation can be solved with helmholtzFoam. The Helmholtz equation with uniformly dispersed bubbles can be simulated with helmholtzBubbleFoam.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Your Digital Assistant.
Making complex approach simple. Straightforward process saves time. No more waiting to connect with people that matter to you. Safety first is not a cliché - Securely protect information in cloud storage to prevent any third party from accessing data.
Would you rather make your visitors feel burdened by making them wait? Or choose VizMan for a stress-free experience? VizMan is an automated visitor management system that works for any industries not limited to factories, societies, government institutes, and warehouses. A new age contactless way of logging information of visitors, employees, packages, and vehicles. VizMan is a digital logbook so it deters unnecessary use of paper or space since there is no requirement of bundles of registers that is left to collect dust in a corner of a room. Visitor’s essential details, helps in scheduling meetings for visitors and employees, and assists in supervising the attendance of the employees. With VizMan, visitors don’t need to wait for hours in long queues. VizMan handles visitors with the value they deserve because we know time is important to you.
Feasible Features
One Subscription, Four Modules – Admin, Employee, Receptionist, and Gatekeeper ensures confidentiality and prevents data from being manipulated
User Friendly – can be easily used on Android, iOS, and Web Interface
Multiple Accessibility – Log in through any device from any place at any time
One app for all industries – a Visitor Management System that works for any organisation.
Stress-free Sign-up
Visitor is registered and checked-in by the Receptionist
Host gets a notification, where they opt to Approve the meeting
Host notifies the Receptionist of the end of the meeting
Visitor is checked-out by the Receptionist
Host enters notes and remarks of the meeting
Customizable Components
Scheduling Meetings – Host can invite visitors for meetings and also approve, reject and reschedule meetings
Single/Bulk invites – Invitations can be sent individually to a visitor or collectively to many visitors
VIP Visitors – Additional security of data for VIP visitors to avoid misuse of information
Courier Management – Keeps a check on deliveries like commodities being delivered in and out of establishments
Alerts & Notifications – Get notified on SMS, email, and application
Parking Management – Manage availability of parking space
Individual log-in – Every user has their own log-in id
Visitor/Meeting Analytics – Evaluate notes and remarks of the meeting stored in the system
Visitor Management System is a secure and user friendly database manager that records, filters, tracks the visitors to your organization.
"Secure Your Premises with VizMan (VMS) – Get It Now"
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Les Buildpacks existent depuis plus de 10 ans ! D’abord, ils étaient utilisés pour détecter et construire une application avant de la déployer sur certains PaaS. Ensuite, nous avons pu créer des images Docker (OCI) avec leur dernière génération, les Cloud Native Buildpacks (CNCF en incubation). Sont-ils une bonne alternative au Dockerfile ? Que sont les buildpacks Paketo ? Quelles communautés les soutiennent et comment ?
Venez le découvrir lors de cette session ignite
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
2. Whoami
• Oracle Consultant since 2001
• Former developer (C, Java, Perl, PL/SQL)
• Hadoop aficionado
• Owner@Premiseo: Data Management on Premises and in the
Cloud
• Blogger since 2004
• http://laurent-leturgez.com
• Twitter : @lleturgez
4. Agenda
• The Drill down approach
• It’s [always] a question of time
• Average Active Sessions and database load
• Identify Bottlenecks with various tools
• Qualify identified bottlenecks to reduce time consumption
• Various tools for a better analysis
• Code instrumentation
• PL/SQL Profiling
6. Addressing a performance issue.
The drilldown approach
• Have you ever encountered these kinds of user reactions?
“My application is slow … can you help me?”
“… it must be the database?”
“It’s slow … (or it hangs) … help!”
• Have you ever answered them like this?
7. Addressing a performance issue.
The drilldown approach
• Usually, we need more information:
“When?”
“Any error message?”
“Is it a general or a specific use case?”
“Are you sure … it’s really the database?”
“How did the problem occur?”
• We need to trust our users (or interview more than one user).
• We have to analyze the issue ourselves.
8. Addressing a performance issue.
The drilldown approach
• Time is the key to the analysis
• A session can spend its time in different ways:
• It waits for work to do → Idle wait time
• It waits for a system call or for something to complete (a lock, an I/O,
etc.) → Active wait time (or non-idle wait time)
• It executes Oracle code on CPU → DB CPU time
• Active time in a session
• Active Wait Time + DB CPU Time
• Active time in the database is DB Time
• DB Time = Σ over all sessions (SID = 1 … n) of (Active Wait Time + DB CPU Time)
9. Addressing a performance issue.
The drilldown approach
• For a session
• 0 ≤ DB Time ≤ Elapsed Time
• DB CPU time = 3 sec
• Non-idle wait time = 2 + 8 + 3.5 = 13.5 sec
• DB Time = 3 + 13.5 = 16.5 sec
• Elapsed time = 60 sec
• At the database level
• Average Active Sessions (AAS) = Σ DB Time / Elapsed Time
[Timeline diagram: session activity over a 60-sec elapsed period — User I/O waits (2 sec and 8 sec), DB CPU (3 sec), TX contention (3.5 sec)]
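The arithmetic above can be checked with a short sketch (plain Python; the numbers are taken from the timeline example on this slide):

```python
# DB Time and AAS arithmetic from the slide's worked example.
# A session's active time = non-idle wait time + DB CPU time.

def db_time(active_wait_sec, db_cpu_sec):
    """Active time of one session: non-idle waits plus CPU."""
    return active_wait_sec + db_cpu_sec

def aas(db_times_sec, elapsed_sec):
    """Average Active Sessions = sum of DB Time over elapsed time."""
    return sum(db_times_sec) / elapsed_sec

# One session: User I/O waits (2 s + 8 s), TX contention (3.5 s), CPU (3 s)
session_db_time = db_time(active_wait_sec=2 + 8 + 3.5, db_cpu_sec=3)
print(session_db_time)               # 16.5 sec of DB Time
print(aas([session_db_time], 60))    # 0.275 average active sessions
```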
10. Addressing a performance issue.
The drilldown approach
• Database Time for a not overloaded system (2 CPUs)
User IO
DB CPU
TX contention
11. Addressing a performance issue.
The drilldown approach
• Database Time for an overloaded system … short period (2 CPUs)
User IO
DB CPU
TX contention
12. Addressing a performance issue.
The drilldown approach
• Database Time for an overloaded system (2 CPUs)
User IO
DB CPU
TX contention
13. Addressing a performance issue.
The drilldown approach
• Average Active Sessions is a key indicator of database load
• AAS = 0 or close to 0: the database is idle
• AAS < # CPU cores: no system bottleneck
• AAS ~ # CPU cores: the database uses all system resources (if one DB per system)
• AAS > # CPU cores: the database is loaded (depending on the CPU share of AAS)
• AAS >> 2 x # CPU cores: the database is overloaded
• Database load = AAS / # CPU cores
• DB load = 0 or close to 0: the database is idle
• DB load < 1: no system bottleneck
• DB load ~ 1: the database uses all system resources (if one DB per system)
• DB load > 1: the database is loaded (depending on the CPU share of AAS)
• DB load >> 2: the database is overloaded
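These rules of thumb can be written down as a small helper (a sketch in plain Python; the exact cutoffs, such as 0.05 for “idle”, are illustrative assumptions — the slide only says “close to 0”):

```python
# Rough DB-load classification following the slide's thresholds.
# db_load = AAS / number of CPU cores. The 0.05 "idle" cutoff is an
# illustrative assumption.

def classify_db_load(aas, cpu_cores):
    """Map an AAS value to the slide's load categories."""
    load = aas / cpu_cores
    if load < 0.05:
        return "idle"
    if load < 1:
        return "no system bottleneck"
    if load <= 1.1:
        return "uses all system resources"
    if load <= 2:
        return "loaded"
    return "overloaded"

print(classify_db_load(aas=3, cpu_cores=8))    # no system bottleneck
print(classify_db_load(aas=20, cpu_cores=2))   # overloaded
```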
14. Addressing a performance issue.
The drilldown approach
• When is AAS / database load at its peak?
• Identify AAS or database-load peak times
• Not very easy with AWR or Statspack reports
• A little easier with the EM Top Activity page (Performance tab)
• Trending and data visualisation is the solution
• Explore AWR tables and views (and ASH)
• Explore Statspack tables and views
• Graphs and plots
• Heatmaps
• Trends
15. Addressing a performance issue.
The drilldown approach
• Trending and data visualisation of AAS
• Granularity matters
• From ASH, you can get AAS
• every second → V$ACTIVE_SESSION_HISTORY (or at a different interval, by
modifying “_ash_sampling_interval”)
• every 10 seconds → DBA_HIST_ACTIVE_SESS_HISTORY
• Or at a coarser granularity … by writing the correct SQL statement
• From Statspack, you can get AAS from a time-model analysis between
two snapshots
16. Addressing a performance issue.
The drilldown approach
• Trending and data visualisation of AAS
• Granularity matters: example — AAS every hour from
DBA_HIST_ACTIVE_SESS_HISTORY (thanks, Marcin Przepiórowski)
SELECT TO_CHAR(sample_time,'YYYY-MM-DD HH24') mtime,
       -- 360 = one hour of 10-second DBA_HIST samples (3600 / 10)
       round(decode(session_state,'WAITING',count(*),0)/360,2) aas_wait,
       round(decode(session_state,'ON CPU',count(*),0) /360,2) aas_cpu,
       round(count(*)/360,2) aas
FROM   dba_hist_active_sess_history
GROUP BY to_char(sample_time,'YYYY-MM-DD HH24'),
         session_state
ORDER BY mtime
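The /360 divisor reflects ASH-style sampling: an hourly bucket holds 3600 / 10 = 360 ten-second samples, so the sample count divided by 360 estimates the average number of active sessions. A small simulation (plain Python, synthetic samples) illustrates the idea:

```python
# ASH-style AAS estimation: count samples per time bucket, divide by
# the number of samples a fully active session would contribute.
from collections import Counter

SAMPLE_INTERVAL_SEC = 10          # DBA_HIST_ACTIVE_SESS_HISTORY interval
BUCKET_SEC = 3600                 # one-hour buckets
SAMPLES_PER_BUCKET = BUCKET_SEC // SAMPLE_INTERVAL_SEC   # 360

# Synthetic ASH rows: (bucket_hour, session_id) — one row per sample in
# which the session was active.
samples = []
samples += [(0, 1)] * 360         # session 1 active the whole hour 0
samples += [(0, 2)] * 180         # session 2 active half of hour 0
samples += [(1, 1)] * 90          # session 1 active a quarter of hour 1

counts = Counter(hour for hour, _sid in samples)
aas_by_hour = {h: c / SAMPLES_PER_BUCKET for h, c in counts.items()}
print(aas_by_hour)    # {0: 1.5, 1: 0.25}
```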
18. Addressing a performance issue.
The drilldown approach
• But is my database overloaded or not?
• Add the CPU core count directly to your graph.
[Chart: AAS and AAS_CPU per hour, with a horizontal line at the CPU core count (CORE)]
19. Addressing a performance issue.
The drilldown approach
• But is my database overloaded or not?
• Or modify your query to compute the DB load and plot it directly.
SELECT mtime,
       ROUND(SUM(load),2) LOAD
FROM
  (SELECT TO_CHAR(sample_time,'YYYY-MM-DD HH24') mtime,
          DECODE(session_state,'WAITING',COUNT(*),0)/360 c1,
          DECODE(session_state,'ON CPU',COUNT(*),0) /360 c2,
          COUNT(*)/360 cnt,
          -- 360 = one hour of 10-second samples; dividing by the core
          -- count turns AAS into a load figure
          COUNT(*)/360/cpu.core_nb load
   FROM   dba_hist_active_sess_history,
          (SELECT value AS core_nb
           FROM   v$osstat
           WHERE  stat_name='NUM_CPU_CORES'
          ) cpu
   GROUP BY TO_CHAR(sample_time,'YYYY-MM-DD HH24'),
            session_state,
            cpu.core_nb
  )
GROUP BY mtime
ORDER BY mtime;
MTIME LOAD
------------- ----------
2017-09-20 00 .4
2017-09-20 01 .43
2017-09-20 02 1.03
2017-09-20 03 .69
2017-09-20 04 .84
2017-09-20 05 .07
2017-09-20 06 .01
2017-09-20 07 .36
2017-09-20 08 .05
2017-09-20 09 .29
2017-09-20 10 .33
2017-09-20 11 .32
2017-09-20 12 .2
2017-09-20 13 .31
2017-09-20 14 .95
2017-09-20 15 .4
[Chart: hourly DB load plotted over the day, peaking slightly above 1]
20. Addressing a performance issue.
The drilldown approach
• Heatmaps to identify bottleneck periods
• Based on the previous queries and the Oracle PIVOT function
• See:
https://laurent-leturgez.com/2016/12/15/database-load-heatmap-with-awr-and-python/
• Dataviz can be done with various tools:
• Tableau Software
• Microsoft Excel with conditional formatting
• Python with the plotly library
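A minimal sketch of the pivot behind such a heatmap (plain Python with synthetic load values; the blog post linked above builds the real thing from AWR data with plotly):

```python
# Pivot hourly load values into a day-by-hour grid and render a crude
# text heatmap — the same shape a plotly or Excel heatmap would use.

loads = {                       # (day, hour) -> DB load (synthetic)
    ("2017-09-20", 2): 1.03,
    ("2017-09-20", 14): 0.95,
    ("2017-09-20", 6): 0.01,
    ("2017-09-21", 2): 1.20,
}

rows = {}
for day in sorted({d for d, _h in loads}):
    row = ""
    for hour in range(24):
        load = loads.get((day, hour), 0.0)
        # shade by load: '.' idle, '+' busy, '#' at/over core capacity
        row += "#" if load >= 1 else "+" if load >= 0.5 else "."
    rows[day] = row
    print(day, row)
```

Peak hours then stand out as a vertical stripe of `#` cells across days.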
23. Addressing a performance issue.
The drilldown approach
• So what is this drilldown approach?
• First, identify if the database is overloaded (user interviews, wide AAS
analysis)
• Then, identify when the database is overloaded (heatmaps, AAS trending)
• Then, identify how the DB time is distributed (AAS trending)
• More CPU time than active wait time?
• More active wait time than CPU time?
• Run an AWR or Statspack report to get more details
• Reduce CPU time, active wait time, or both … (with the help of your brain!)
• If more CPU time: for example, analyze SQL statements that burn the buffer cache
• If more active wait time: identify which wait(s), and resolve the issue(s)
24. Addressing a performance issue.
The drilldown approach
• So what is this drilldown approach?
[Flowchart]
1. Identify if the database is overloaded (user interviews, wide AAS analysis)
2. Identify when the database is overloaded (heatmap, AAS trending)
3. Identify how the DB time is distributed (AAS trending): more CPU time than active wait time, or the reverse?
4. Run an AWR or Statspack report to get more details
5. Reduce CPU time, active wait time, or both (if more CPU time: analyze SQL statements that burn the buffer cache, for example; if more active wait time: identify which wait(s) and resolve the issue(s))
25. Addressing a performance issue.
The drilldown approach
• OK, but what if I haven’t bought the Diagnostics Pack, or I run Standard
Edition?
• Heatmaps are not possible, because they are based on ASH
• You can graph AAS, AAS_WAIT and AAS_CPU over a large period
• Then reduce the time scale and redo the same AAS trending
• How?
• See:
https://laurent-leturgez.com/2015/11/06/active-average-session-trending-in-statspack/
• Time-model analysis with a specific function
• Get DB Time and DB CPU, and calculate the active wait time for every period between two
snapshots
• Calculate AAS = DB Time / Elapsed for every period between two snapshots
• And plot it!
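The Statspack arithmetic is just deltas between snapshots. A sketch in plain Python (snapshot values are synthetic; times are in microseconds, as in the time-model views — an assumption stated here):

```python
# AAS between two Statspack snapshots, from time-model deltas.
# DB Time values come from the time model (microseconds here).

def aas_between_snaps(db_time_us, prev_db_time_us, elapsed_sec):
    """AAS for the interval = delta DB Time / elapsed wall-clock time."""
    return (db_time_us - prev_db_time_us) / 1_000_000 / elapsed_sec

# Two snapshots one hour apart; DB Time grew by 5400 seconds.
prev, cur = 1_000_000_000, 1_000_000_000 + 5_400_000_000
print(aas_between_snaps(cur, prev, 3600))   # 1.5 average active sessions
```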
28. Addressing a performance issue.
The drilldown approach
• Code instrumentation
• Use of DBMS_APPLICATION_INFO
• Adds information to V$SESSION, V$SESSION_LONGOPS, V$SQL_MONITOR,
V$SQL (and some others)
• MODULE
• ACTION
• CLIENT_INFO (only in V$SESSION and V$SQL_MONITOR)
• Then dispatched to
• ASH (V$ACTIVE_SESSION_HISTORY, DBA_HIST_ACTIVE_SESS_HISTORY)
• AWR (DBA_HIST_SQLSTAT)
• Statspack (only MODULE, in STATS$V_$SQLXS, STATS$SQL_SUMMARY and
STATS$TEMP_SQLSTATS)
Note: CLIENT_INFO is not dispatched
29. Addressing a performance issue.
The drilldown approach
• Code Instrumentation
[Screenshots: activity breakdown without code instrumentation vs. with code instrumentation]
30. Addressing a performance issue.
The drilldown approach
• Code Instrumentation: Java sample code
public static void main(String[] args) throws Exception {
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
// Warning: this is a simple example program : In a long running application,
// error handlers MUST clean up connections statements and result sets.
String module, prev_module;
String action, prev_action;
Connection c = DriverManager.getConnection("jdbc:oracle:thin:@192.168.99.8:1521:orcl", "system", "oracle");
CallableStatement call = c.prepareCall("begin dbms_application_info.set_module(module_name => ?, action_name => ?); end;");
module="PAYCHECK"; action="HEADER";
try{
call.setString(1,module); call.setString(2,action);
call.execute();
}
catch (SQLException e) {e.printStackTrace();}
// PAYCHECK HEADER EDITION … HERE
module="PAYCHECK"; action="MAIN";
try{
call.setString(1,module); call.setString(2,action);
call.execute();
}
catch (SQLException e) {e.printStackTrace();}
finally {call.close();}
// PAYCHECK MAIN PART EDITION … HERE
c.close();
}
Note: back up the previous module and action (the unused prev_module / prev_action variables) with the DBMS_APPLICATION_INFO.READ_MODULE procedure before overwriting them.
31. Addressing a performance issue.
The drilldown approach
• Code instrumentation: the drilldown approach
[Flowchart]
1. Identify top-module activity
2. For the top modules, identify the top actions
3. When possible, identify the client with CLIENT_INFO
4. Complete the performance analysis of the specific module/action:
• AWR / ASH
• Top-SQL identification in this module/action
• Code inspection
• Code profiling
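Identifying top modules and actions from ASH samples is a simple group-and-count. A sketch with synthetic samples (plain Python; in practice you would query V$ACTIVE_SESSION_HISTORY grouped by MODULE and ACTION):

```python
# Rank instrumented modules/actions by ASH sample count — each V$ sample
# represents roughly one second of DB time.
from collections import Counter

# Synthetic ASH rows: (module, action), as set by DBMS_APPLICATION_INFO
ash = (
    [("PAYCHECK", "MAIN")] * 120
    + [("PAYCHECK", "HEADER")] * 30
    + [("REPORTING", "EXPORT")] * 50
)

top_modules = Counter(m for m, _a in ash)
top_actions = Counter(ash)
print(top_modules.most_common(1))   # [('PAYCHECK', 150)]
print(top_actions.most_common(1))   # [(('PAYCHECK', 'MAIN'), 120)]
```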
32. Addressing a performance issue.
The drilldown approach
• PL/SQL profiling
• Profiles the runtime behaviour of PL/SQL code
• Allows bottleneck identification
• Introduced in Oracle 8i (DBMS_PROFILER)
• Oracle 11gR1 introduced the hierarchical PL/SQL profiler (DBMS_HPROF)
33. Addressing a performance issue.
The drilldown approach
• PL/SQL profiling: how does it work?
[Diagram: DBMS_PROFILER.start_profiler → profiled code runtime → DBMS_PROFILER.stop_profiler]
34. Addressing a performance issue.
The drilldown approach
• Real Life example #1
• A night batch runs too long (and ends after 4 AM)
• Oracle 10g → classic profiler (DBMS_PROFILER)
• The culprit is the billing (sub-)batch (PL/SQL) → profiling
35. Addressing a performance issue.
The drilldown approach
• Real Life example #1 … results
UNIT_NAME LINE# TOTAL_OCCUR TOTAL_SEC MIN_SEC MAX_SEC
--------------- ----- ----------- ---------- ---------- ----------
.../...
XNPCK_XXTRACE 56 403076907 1001,42 0 ,02
XNPCK_XXTRACE 57 403076907 292,88 0 ,02
XNPCK_XXTRACE 58 403076907 280,86 0 ,01
XNPCK_XXTRACE 61 403076907 278,86 0 ,04
XNPCK_XXTRACE 67 403076907 0 0 0
XNPCK_XXTRACE 69 403076907 275,28 0 ,02
XNPCK_XXTRACE 70 0 0 0 0
XNPCK_XXTRACE 71 0 0 0 0
XNPCK_XXTRACE 72 0 0 0 0
XNPCK_XXTRACE 73 0 0 0 0
XNPCK_XXTRACE 75 403076907 997,55 0 ,02
XNPCK_XXTRACE 76 403076907 0 0 0
XNPCK_XXTRACE 79 403076907 658,23 0 ,03
.../...
XNPCK_XXTRACE 88 403076907 0 0 0
XNPCK_XXTRACE 90 403076907 540,78 0 ,02
XNPCK_XXTRACE 91 0 0 0 0
.../...
XNPCK_XXTRACE2 38 403076907 398,42 0 ,01
XNPCK_XXTRACE2 45 403076907 279,6 0 ,01
XNPCK_XXTRACE2 48 403076907 273,83 0 ,01
XNPCK_XXTRACE2 49 1 0 0 0
XNPCK_XXTRACE2 50 1 0 0 0
XNPCK_XXTRACE2 51 1 0 0 0
XNPCK_XXTRACE2 53 403076906 270,2 0 ,02
XNPCK_XXTRACE2 54 403076906 298,52 0 ,02
XNPCK_XXTRACE2 56 403076907 315,83 0 ,02
XNPCK_XXTRACE2 59 0 0 0 0
XNPCK_XXTRACE2 60 403076907 319,54 0 ,03
Line 56 of package body XNPCK_XXTRACE has been
executed 403076907 times.
Each execution took between 0 and 0.02 sec.
The total time for this step is 1001 seconds.
What are these package names ??? xxTRACExx …
After having a look at the code near these line numbers,
we found something like this:
file1 := utl_file.fopen('UTL_DIR','debug.txt','w');
utl_file.put_line(file1,'Some information');
utl_file.fclose(file1);
In a loop !!
36. Addressing a performance issue.
The drilldown approach
• PL/SQL hierarchical profiler: how does it work?
[Diagram: DBMS_HPROF.start_profiling(location => DIR, filename => 'profiler.txt') → profiled code runtime → DBMS_HPROF.stop_profiling → raw profile written to profiler.txt]
37. Addressing a performance issue.
The drilldown approach
• Real Life example #2
• The customer complains about a very slow application
• The culprit is the database (11gR2) → DB load constantly above 4
• The customer bought more Oracle CPUs; application capacity increased
• After a while, the application becomes slow again: capacity cannot grow as
fast as the application’s popularity
• After code instrumentation, a problematic code path is identified → profiling
(hierarchical)
38. Addressing a performance issue.
The drilldown approach
• Real Life example #2 … hierarchical profiler results
SUBTREE FUNCTION
ELAPSED TIME ELAPSED TIME
LEVEL NAME LINE# TYPE uSec uSec CALLS
---------- --------------------------------------------------------------------- ----- ----- ------------ ------------ ----------
1 MGUSER.PR_CALC_BULL_FIN_STC_AED 1 PLSQL 25203972 309 1
.../...
2 MGUSER.PR_TRAITEGEN_MULT_DEPART_CDD 1 PLSQL 25198639 1123 1
.../...
3 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD 1 PLSQL 24703905 168802 1
.../...
4 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD.__static_sql_exec_line509 509 SQL 209948 209948 3146
4 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD.__static_sql_exec_line578 578 SQL 11620988 11620988 239
4 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD.__static_sql_exec_line613 613 SQL 1133735 1133735 239
4 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD.__static_sql_exec_line668 668 SQL 504919 504919 239
4 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD.__static_sql_exec_line802 802 SQL 1272456 1272456 239
4 MGUSER.PR_TRAITEGEN_LISTE_DEPART_CDD.__static_sql_exec_line843 843 SQL 166459 166459 3146
.../...
This call itself took 168802 µsec,
but its subtree took
24703905 µsec → we need to
analyze the next level (4)
This call took 11620988 µsec.
No sub-executions: subtree time
= function time (and it’s a SQL operation).
A single execution took 11620988 / 239 =
48623 µsec.
Compared to the other SQL at this level … it’s the
main time consumer
NAME TEXT LINE
------------------------------ ----------------------------------------------------------------- ----------
PR_TRAITEGEN_LISTE_DEPART_CDD begin 576
PR_TRAITEGEN_LISTE_DEPART_CDD PR_PERF('MILI', '','','AED'); 577
PR_TRAITEGEN_LISTE_DEPART_CDD select /*+ index(pgd PK_PARA_GENE_DNAC) */ 578
PR_TRAITEGEN_LISTE_DEPART_CDD id_para, 579
PR_TRAITEGEN_LISTE_DEPART_CDD 'PRES' as status_aed, 580
PR_TRAITEGEN_LISTE_DEPART_CDD date_gene, 581
Oh wait — a hint! We analyzed the SQL plan and tuned it
by simply removing the hint … PROBLEM FIXED !!
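Reading a hierarchical-profiler line comes down to two divisions: self time vs. subtree time, and elapsed time per call. A quick check in plain Python, using the figures from the line-578 static SQL call above:

```python
# Hierarchical-profiler arithmetic for the line-578 static SQL call.

subtree_us  = 11_620_988   # subtree elapsed time (µsec)
function_us = 11_620_988   # function's own elapsed time (µsec)
calls       = 239

# Subtree time == function time -> no time in sub-calls (a leaf node).
assert subtree_us == function_us

per_call_us = function_us // calls
print(per_call_us)   # 48623 µsec per execution — the main consumer here
```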
39. Addressing a performance issue.
The drilldown approach
• PL/SQL hierarchical profiler: data visualisation
• Tools exist to visualise PL/SQL hierarchical profiles
• SQL Developer
• Martin Büchi’s tools
• A set of packages (Java & PL/SQL) that display a PL/SQL profile in a web browser
• Google Developer Tools: cpuprofile
• Brendan Gregg’s FlameGraph
40. Addressing a performance issue.
The drilldown approach
• PL/SQL hierarchical profiler: data visualisation with a flame graph
• The raw profile (profiler.txt) is flattened, then rendered:
SQL> exec ora_hprof#.flatten('WORK_DIR','profiling_4E0F4C0A96016C63E0537A1EA8C0113F_2202','profile_flat.txt');
$ flamegraph.pl /var/tmp/profile_flat.txt > /var/tmp/profile_flat.svg
41. Addressing a performance issue.
The drilldown approach
• Conclusion
• The key is time analysis
• Proceed from the general to the detail
• After identifying bottlenecks, use the right tool for the right job
• Code instrumentation
• PL/SQL profiling (and the hierarchical profiler)
• Better when using graphical tools