This document provides an overview of Active Session History (ASH), an Oracle database feature that samples database session states over time. It summarizes how ASH takes snapshots of active session states, including session details and wait events, and stores this data in memory and on disk. It also outlines how ASH data can be used to estimate database time usage, identify tuning opportunities, and troubleshoot session issues. The document discusses the key concepts of how ASH works, the dimensions of data that are sampled, and how parameters can control the sampling process.
ASH and AWR Performance Data, by Kellyn Pot'Vin (Enkitec)
This document provides an overview of ASH and AWR performance data. It discusses the history and purpose of ASH and AWR, how the AWR repository works, what data is contained in ASH samples, and how to run various ASH and AWR reports through the command line and Enterprise Manager. Specific examples show how to use ASH and AWR data to diagnose a blocking session issue on a RAC database. Best practices for querying ASH data directly are also covered.
This use case involves identifying and resolving a slow ETL process that was taking over 20 hours to complete each month. To troubleshoot, the author queried ASH and AWR to identify the top SQL statements by sample count and execution statistics. This revealed two SQL statements accounting for over 45% of samples during the ETL window. The author then analyzed the execution plans for these statements to identify performance issues and apply optimizations to reduce the ETL time.
This document provides an overview of Oracle's Active Session History (ASH) feature. ASH samples database sessions every second to capture session states and activity. It stores this data in an in-memory circular buffer and periodically writes samples to disk for analysis. ASH data provides insights into database time usage, top SQL, wait events, and blocking issues. It can be used for performance analysis by aggregating and analyzing ASH dimensions like SQL_ID, event, and wait class over time.
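Because ASH samples each active session roughly once per second, every sample approximates one second of DB time, so counting samples per dimension estimates where time went. A minimal sketch of that counting logic in Python, using made-up sample rows rather than real ASH output:

```python
from collections import Counter

# Hypothetical ASH-style sample rows: one row per active session per second.
# Each row approximates ~1 second of DB time attributed to that session.
samples = [
    {"sql_id": "abc123", "wait_class": "User I/O"},
    {"sql_id": "abc123", "wait_class": "User I/O"},
    {"sql_id": "abc123", "wait_class": "CPU"},
    {"sql_id": "def456", "wait_class": "CPU"},
]

def db_time_by(dimension, rows):
    """Estimate DB seconds per value of an ASH dimension (sql_id, wait_class, ...)."""
    return Counter(row[dimension] for row in rows)

print(db_time_by("sql_id", samples))      # abc123 accounts for ~3 DB seconds
print(db_time_by("wait_class", samples))  # ~2s on User I/O, ~2s on CPU
```

The same idea is what an aggregate query over the ASH views expresses: `COUNT(*)` grouped by a dimension is an estimate of DB time spent in that dimension.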
This document provides an overview of the Automatic Workload Repository (AWR) and Active Session History (ASH) features in Oracle Database 12c. It discusses how AWR and ASH work, how to access and interpret their reports through the Oracle Enterprise Manager console and command line interface. Specific sections cover parsing AWR reports, querying ASH data directly, and using features like the SQL monitor to diagnose performance issues.
The document outlines Oracle's general product direction and provides examples of new features in Oracle Database 12c, including temporal data support and pattern matching in SQL. Temporal features allow tracking of transactional and valid time periods to model historical and valid data. Pattern matching in SQL uses regular expressions to simplify analysis of big data and identify patterns and events. Examples show querying historical customer data and detecting stock price patterns. The information is provided for information only and Oracle retains sole discretion over product features.
This document discusses features of Oracle Database 12c related to Automatic Workload Repository (AWR), Active Session History (ASH), and Automatic Database Diagnostic Monitor (ADDM). It provides an overview of AWR and ASH, how they have evolved, and how they can be used to analyze database performance. It also demonstrates how AWR, ASH, and related performance data can be accessed and analyzed using Oracle Enterprise Manager 12c and command line interfaces.
The document discusses various performance tuning concepts in Oracle including CPU time, database time, reading Statspack/AWR reports, parse CPU to parse elapsed ratio, execute to parse ratio, latches, and wait events. It provides explanations and examples for each concept to help understand how to analyze performance issues from monitoring reports.
The document discusses analyzing database systems using a 3D method for performance analysis. It introduces the 3D method, which looks at performance from the perspectives of the operating system (OS), Oracle database, and applications. The 3D method provides a holistic view of the system that can help identify issues and direct solutions. It also covers topics like time-based analysis in Oracle, how wait events are classified, and having a diagnostic framework for quick troubleshooting using tools like the Automatic Workload Repository report.
This document provides an overview of performance monitoring capabilities in Oracle Database 12c and Enterprise Manager 13c. It discusses the Automatic Workload Repository (AWR) and Active Session History (ASH), which capture database performance statistics. The document outlines changes and enhancements to AWR and ASH in areas like in-memory, manageability reporting, and usability. It also discusses related features like the AWR warehouse and SQL Monitor.
This is the presentation on ASH that I did with Graham Wood at RMOUG 2014 and that represents the final best effort to capture essential and advanced ASH content as started in a presentation Uri Shaft and I gave at a small conference in Denmark sometime in 2012 perhaps. The presentation is also available publicly through the RMOUG website, so I felt at liberty to post it myself here. If it disappears it would likely be because I have been asked to remove it by Oracle.
This document provides an overview of Automatic Workload Repository (AWR) and Active Session History (ASH) in Oracle databases. AWR collects workload statistics and creates snapshots of the database to be used for performance monitoring. ASH samples active database sessions every second. Together, AWR and ASH provide historical performance data that can be queried and analyzed. The document discusses how to generate and interpret various AWR and ASH reports to identify top SQL, sessions, waits, and troubleshoot performance issues.
Getting Optimal Performance from Oracle E-Business Suite, by Berry Clemens
The document provides guidance on optimizing performance of the Oracle E-Business Suite applications tier. It recommends staying current with the latest release updates and family packs. It also provides tips on optimizing logging settings, workflow processes, Forms processes, JVM processes, and sizing the middle tier for concurrency. Specific recommendations include purging workflow runtime data, translating workflow activity function calls, disabling workflow queue retention, and sizing JVM heaps and Forms memory based on formulas provided.
The document discusses using the MODEL clause in SQL to calculate running totals that group rows together such that the total for each group does not exceed a given threshold. An example is provided that models transaction data from different sites by calculating a running total and grouping sites together in the model where the running total does not exceed 65,000. The results show the start site, end site, and maximum running total for each group.
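The grouping logic described above can be sketched outside SQL as well. A minimal Python equivalent that walks rows in order and starts a new group whenever adding a row's value would push the running total past the threshold (the sites and values below are illustrative, not taken from the document):

```python
def group_by_running_total(rows, threshold):
    """Group consecutive (site, value) rows so each group's total <= threshold.

    Returns (start_site, end_site, group_total) tuples, mirroring the
    start site / end site / max running total columns in the article.
    """
    groups, current, total = [], [], 0
    for site, value in rows:
        if current and total + value > threshold:
            groups.append((current[0], current[-1], total))  # close the group
            current, total = [], 0
        current.append(site)
        total += value
    if current:
        groups.append((current[0], current[-1], total))
    return groups

rows = [("A", 30000), ("B", 20000), ("C", 25000), ("D", 40000)]
print(group_by_running_total(rows, 65000))
# [('A', 'B', 50000), ('C', 'D', 65000)]
```

The SQL MODEL clause expresses the same reset-on-threshold recursion declaratively; this procedural version just makes the rule explicit.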
This document provides 50 tips for boosting MySQL performance. It begins with introductions and outlines the program agenda which includes introductions, presenting the 50 performance tips, and a question and answer section. The tips cover various aspects of optimizing MySQL performance including hardware setup, operating system configuration, MySQL configuration settings, query and index optimization, and monitoring.
This document provides a list of interview questions for an Oracle DBA with 3+ years of experience. It covers basic, moderate, advanced, and master level questions. The basic section includes questions about default passwords, connecting to Oracle, and using clients like SQL*Plus. The moderate section covers topics like PFILE vs SPFILE and Data Pump. The advanced section includes questions about background processes, views, and shutdown modes. The master section contains very specific questions even an experienced DBA may struggle with.
This document provides an overview of how to use various Oracle performance monitoring and diagnostic tools like ASH, AWR, and SQL Monitor to analyze and troubleshoot performance issues. It begins with introductions and background on the speaker. It then demonstrates how to generate and interpret reports from these tools using the Oracle Enterprise Manager console and command line. It provides examples of querying ASH data directly and using tools like Compare ADDM and SQL Monitor. The document aims to help users quickly understand performance problems by leveraging these built-in Oracle performance diagnostics.
Graal is a dynamic meta-circular research compiler for Java that is designed for extensibility and modularity. One of its main distinguishing elements is the handling of optimistic assumptions obtained via profiling feedback and the representation of deoptimization guards in the compiled code. Truffle is a self-optimizing runtime system on top of Graal that uses partial evaluation to derive compiled code from interpreters. Truffle is suitable for creating high-performance implementations of dynamic languages with only moderate effort. The presentation includes a description of the Truffle multi-language API and performance comparisons of current prototype Truffle language implementations (JavaScript, Ruby, and R) against industry counterparts. Both Graal and Truffle are open source and are themselves research platforms for virtual machine and programming language implementation (http://openjdk.java.net/projects/graal/).
The document discusses symbolic representations of time series data using techniques like SAX (Symbolic Aggregate approXimation). It provides details on:
- Representing time series as sequences of time-value pairs that can be segmented into windows and represented by symbols
- Using techniques like SAX to reduce time series data to symbols from a finite symbol space, allowing for dimensionality reduction and efficient storage and processing.
- The SAX algorithm which discretizes time series windows based on breaking points from a Gaussian distribution to map windows to symbols while preserving distances between time series.
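The pipeline in the bullets above (z-normalize, piecewise aggregate approximation, then map segment means to symbols via Gaussian breakpoints) can be sketched compactly. This is a minimal illustration, not the reference SAX implementation; the breakpoints are hard-coded for a 4-symbol alphabet:

```python
import statistics

# Breakpoints that split N(0,1) into 4 equiprobable regions (alphabet size 4).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
SYMBOLS = "abcd"

def sax(series, n_segments):
    """Convert a numeric series into a SAX word of n_segments symbols."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series) or 1.0
    z = [(x - mean) / std for x in series]           # z-normalize
    seg_len = len(z) / n_segments
    word = []
    for i in range(n_segments):                       # PAA: mean of each segment
        seg = z[int(i * seg_len): int((i + 1) * seg_len)]
        m = statistics.fmean(seg)
        # Count how many breakpoints the segment mean exceeds -> symbol index.
        word.append(SYMBOLS[sum(m > b for b in BREAKPOINTS)])
    return "".join(word)

print(sax([1, 1, 2, 2, 8, 8, 9, 9], 4))  # low values map to 'a', high to 'd'
```

Because the breakpoints make each symbol roughly equiprobable under the Gaussian assumption, distances between SAX words lower-bound distances between the original z-normalized series, which is what makes the representation useful for indexing.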
The document discusses using Statspack and AWR (Automatic Workload Repository) to analyze SQL performance and identify poorly performing queries. It provides examples of Statspack reports and how to interpret them to find SQL statements that are doing full table scans, experiencing buffer cache misses, or are inefficient due to lack of bind variables. The document also discusses how to identify SQL statements that are causing excessive sorting.
This document discusses various MySQL performance metrics that are important to measure from within the database, operating system, and application. It outlines key InnoDB internal structures like the buffer pool and log system. Specific metrics that provide insight into buffer pool usage, page churn, and log writes are highlighted. Optimizing the working set size and ensuring sufficient free space in the log files are important factors for performance.
Why Is My Oracle E-Biz Database Slow? A Million-Dollar Question, by Ajith Narayanan
The document discusses analyzing the system capacity of the database and middle tiers for an Oracle E-Business Suite environment. It covers various statistical methods for analyzing the database tier capacity, including simple math models using CPU and memory metrics, linear regression analysis of logical reads versus CPU utilization, and queuing theory models. It also provides recommendations for analyzing the middle tier, such as checking the application server access logs for errors, tuning JDBC settings, sizing the concurrent managers correctly, and analyzing long-running concurrent programs. The document aims to help understand if the system is properly sized to serve the workload by applying these different analytical techniques.
Performance Tuning with Oracle ASH and AWR, Part 1: How and What, by udaymoogala
The document discusses various techniques for identifying and analyzing SQL performance issues in an Oracle database, including gathering diagnostic data from AWR reports, ASH reports, SQL execution plans, and real-time SQL monitoring reports. It provides an overview of how to use these tools to understand what is causing performance problems by identifying what is slow, quantifying the impact, determining the component involved, and analyzing the root cause.
The document discusses performance monitoring tools Automatic Workload Repository (AWR) and Active Session History (ASH) in Oracle Database 12c. It provides a brief history of AWR and ASH and describes how they are used to capture database performance metrics. The document also summarizes various reports available through AWR and ASH and how they can be accessed through Oracle Enterprise Manager and command line interfaces. Examples of queries are provided to analyze wait events, time spent in SQL, I/O and other activities from the data collected in AWR and ASH.
This presentation focuses on the Oracle RAC 12c Release 2 features that ensure continuous availability for applications using an Oracle RAC database for High Availability.
This document provides information about Kyle Hailey's background and expertise in Oracle performance tuning. It discusses his experience working with Oracle since 1990 and his focus on simplifying performance information for DBAs. It promotes tools like OEM ASHMON/SASH and the DB Optimizer for interactively exploring performance data in a clear, understandable way.
1. The document discusses using graphics and data visualization to improve understanding of database performance issues and SQL tuning. It provides examples of how visualizations can clearly show relationships in complex SQL queries and data that are difficult to understand from text or code alone.
2. Key steps in visual SQL tuning are laid out, including drawing tables as nodes, joins as connection lines, and filters as markings on tables. This helps identify optimization opportunities like missing indexes or stale statistics.
3. The document emphasizes that a lack of clarity in visualizing complex data and queries can have devastating consequences, while graphics enable easy understanding and effective problem-solving.
This document discusses various types of I/O waits in an Oracle database including sequential reads, scattered reads, parallel reads, and reads by other sessions. It provides details on identifying and troubleshooting I/O issues through metrics like average read times, buffer cache hit ratios, and top SQL statements causing I/O. Specific solutions covered include right-sizing the SGA and buffer cache, tuning high I/O SQL, and ensuring adequate disk throughput.
Pivot Query on Automatic Workload Repository (OraDBA), by cookie1969
The document discusses using a pivot query to summarize metrics stored in Oracle's Automatic Workload Repository (AWR) in a more readable tabular format. It shows how to write a basic query to retrieve the raw name/value pairs from AWR and then uses a pivot query to transform the rows of data into columns with the metric names as column headers and maximum values across snapshots as the values. This allows the summarized data to be more easily used for creating charts and reports.
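The pivot described above can be mimicked in plain Python: collapse name/value metric rows into one column per metric name, keeping the maximum value across snapshots. The metric names and values below are illustrative placeholders, not real AWR output:

```python
def pivot_max(rows):
    """Pivot (metric_name, value) rows into {metric_name: max value across rows}."""
    out = {}
    for name, value in rows:
        out[name] = max(out.get(name, float("-inf")), value)
    return out

# Hypothetical raw name/value pairs as they might come back from an
# AWR metrics query, one row per (snapshot, metric).
rows = [
    ("Host CPU Utilization (%)", 42.0),
    ("Host CPU Utilization (%)", 57.5),
    ("Physical Reads Per Sec", 1200.0),
    ("Physical Reads Per Sec", 980.0),
]
print(pivot_max(rows))
# {'Host CPU Utilization (%)': 57.5, 'Physical Reads Per Sec': 1200.0}
```

In SQL the same shape comes from a `PIVOT (MAX(value) FOR metric_name IN (...))` clause; either way the row-per-metric layout becomes a column-per-metric table that is easy to chart.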
This document discusses using machine learning to detect poor database performance. Traditional rule-based monitoring does not scale well to handle hundreds or thousands of databases. Machine learning can help automate the detection of known poor performance issues before problems arise. The presentation will cover how to apply machine learning techniques to quickly and automatically identify recognized performance situations using data from tools like AWR and ASH. It will also discuss resources for learning machine learning through courses and training from OraPub.
This document discusses various ways to help the Oracle Cost Based Optimizer (CBO) generate better execution plans. It covers topics like misused initialization parameters, system statistics, extended statistics, and defining selectivity and cost for PL/SQL functions. The speaker is Jože Senegačnik, an Oracle ACE Director and expert with over 21 years of experience with Oracle.
The document discusses using the sar (system activity reporter) utility to analyze I/O performance issues. Sar measures CPU, memory, and disk utilization over time. It was used to identify a bottleneck causing high I/O wait times in a payroll system. Sar reported redo log disks at 80-100% utilization. Upgrading an SRDF link from 2 to 4 links resolved the issue by improving redo log synchronization between disk arrays. The document provides steps for collecting sar data, loading it into a database, and graphing it to analyze system performance over time.
The document discusses execution plans in Oracle, including what they are, how to view them using tools like DBMS_XPLAN, details contained in plans and how to interpret them, tips for tuning plans such as gathering statistics and adding indexes, and provides an example case study of tuning a SQL statement that was performing a full table scan through the use of indexes.
Use PyCharm for Remote Debugging of WSL on a Windows Machine, by shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
40. About Us
I am…
•Oracle ACE Director
•Sr. Technical Consultant, Enkitec
Enkitec is…
Oracle Platinum Partner specializing in:
Oracle Exadata
Oracle Database, including RAC
Oracle Database Performance Tuning
Oracle APEX and so much more!
41. The Consultant’s Challenge
“Hybrid” workload environment:
Transactional, ETL, Reporting
Upgraded to 11g in previous year
Consistent degradation since upgrade
ETL throughput down from 400 “businesses” per hour to 200-300
ETL code review and enhancement in works
“What can you do for us now outside of that effort?”
Goal: Load 700 businesses per hour!!
42. Oracle Tools of the Trade
AWR Reports:
First offered by onsite DBA, always available
“Averaging” effect of large snapshot times hiding issues
ASH Reports:
Help identify problem times with finer granularity
Target reports to problem times, gives clearer picture
Enterprise Manager 12c
Used to enhance ASH findings and do further research
Top Activity, SQL Details, ASH Analytics
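The command-line AWR and ASH reports above are generated from SQL*Plus via the standard scripts shipped in $ORACLE_HOME/rdbms/admin:

```sql
-- From SQL*Plus ('?' expands to $ORACLE_HOME); both scripts prompt
-- interactively for the report window and output format.
@?/rdbms/admin/awrrpt.sql   -- AWR report for a chosen snapshot range
@?/rdbms/admin/ashrpt.sql   -- ASH report for a chosen begin/end time
```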
43. Why AWR Wasn’t the Answer
The “problem” was not visible
We expect to use CPU and to do I/O
AWR snapshot interval too coarse, problems average out
Issue not workload change or data volumes, ETL just degrading over time
44. Why ASH Was…
Zero-in on problem time
More definitive breakdown of data
Exposed competing PL/SQL procedures
Session level information
Interested in impacts not frequencies
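“Impacts not frequencies” works because ASH samples active sessions once per second, so counting samples approximates seconds of DB time. A minimal sketch of the breakdown (the 15-minute window and top-10 limit are illustrative):

```sql
-- Each ASH row is a one-second sample of an active session, so COUNT(*)
-- approximates seconds of DB time: impact, not execution frequency.
SELECT *
FROM (
  SELECT sql_id,
         COUNT(*) AS ash_seconds,
         ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS pct_activity
  FROM   v$active_session_history
  WHERE  sample_time > SYSDATE - INTERVAL '15' MINUTE   -- the problem window
  AND    sql_id IS NOT NULL
  GROUP  BY sql_id
  ORDER  BY ash_seconds DESC
)
WHERE ROWNUM <= 10;
```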
45. ASH Report Targets CPU Spike
Breakdown by the minute, by interval
CPU spikes in a four-minute period
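The per-minute breakdown behind this slide can be sketched directly against ASH (the one-hour window is illustrative):

```sql
-- Bucket ASH samples by minute to expose a short spike that a
-- 30- or 60-minute AWR snapshot would average away.
SELECT TO_CHAR(TRUNC(sample_time, 'MI'), 'HH24:MI') AS minute,
       session_state,                    -- 'ON CPU' vs 'WAITING'
       COUNT(*) AS ash_seconds
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - INTERVAL '1' HOUR
GROUP  BY TRUNC(sample_time, 'MI'), session_state
ORDER  BY 1, 2;
```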
46. ASH Top SQL Exposes Oddities
STATS_ADMIN??
SQL Analyze??
Where does this SQL originate from?
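To answer that question, ASH also records the program, module, and action of each sampled session, which usually reveals where unfamiliar SQL comes from. A sketch (the &sql_id substitution variables are placeholders for the suspicious statements):

```sql
-- Group the suspicious statements by their originating program/module/action
SELECT sql_id, program, module, action, COUNT(*) AS ash_seconds
FROM   v$active_session_history
WHERE  sql_id IN ('&sql_id_1', '&sql_id_2')
GROUP  BY sql_id, program, module, action
ORDER  BY ash_seconds DESC;
```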
47. EM Exposes Problem SQL Profiles
EM Search SQL found multiple plans for critical ETL statements with vastly different performance (?)
Click-through bad plan to expose existence of SQL Profile
Oops, profiles are supposed to fix plans!
48. What Caused This?
High profile environment, very sensitive to change
Stats collection using a custom wrapper over a deprecated Oracle package dating back prior to 9i (DBMS_ADMIN)
Also using 11g stats collection (DBMS_STATS)
DBMS_ADMIN was deprecated for a reason!
Analysis of object stats providing poor data to CBO
Other automated maintenance window tasks were expensive and competing for resources at exactly the wrong time (i.e., ETL time)
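The competing maintenance-window tasks can be inspected, and selectively disabled, through the 11g autotask views and package. A sketch (which task to disable is a DBA judgment call; 'sql tuning advisor' below is one example):

```sql
-- List the 11g automated maintenance tasks and their status
SELECT client_name, status
FROM   dba_autotask_client;

-- Disable a task that competes with the ETL window
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
```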
49. Steps to Correct
Migrated to DBMS_STATS for all stats collection
Disabled jobs using the custom wrapper over DBMS_ADMIN
Removed SQL Profiles impacting bad ETL plans
Additional steps taken:
Migrated select B-tree indexes to bitmap indexes, recovering much-needed disk space
Continued to review ASH, AWR and session SQL performance for improvement
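The profile removal and stats migration above can be sketched with the standard packages (the profile name and schema are hypothetical):

```sql
-- Locate SQL Profiles that may be locking in a bad plan
SELECT name, status, created
FROM   dba_sql_profiles;

-- Drop an offending profile (name below is hypothetical)
BEGIN
  DBMS_SQLTUNE.DROP_SQL_PROFILE(name => 'SYS_SQLPROF_example');
END;
/

-- Re-gather stats with the supported package (schema name is hypothetical)
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'ETL_OWNER', cascade => TRUE);
END;
/
```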