The aim of the query optimizer is not only to provide the SQL engine with execution plans that describe how to process data but also, and more importantly, to provide efficient execution plans. Even though this central component of Oracle Database is enhanced with every new release, there are always cases where it generates suboptimal execution plans. The aim of this presentation is to describe and demonstrate how, with Adaptive Query Optimization, a set of features available as of Oracle Database 12c, the query optimizer is able to generate fewer suboptimal execution plans.
Even though 12.1.0.2 is "only" a patch set, it introduces a number of very interesting performance features. The In-Memory Column Store is the best known among them, but a number of additional features that, for example, help optimize the physical storage and caching of data are also available. The aim of this session is to explain and demonstrate how these new features work.
Oracle Database In-Memory introduces a number of new features in the query optimizer. The aim of this presentation is to describe and demonstrate how they work.
Indexes: Structure, Splits and Free Space Management Internals - Christian Antognini
The document discusses the internal structure and management of Oracle database indexes. It describes how B-tree indexes are structured, including the use of branch blocks and leaf blocks. It also covers concepts like index keys, splits that occur as indexes grow, free space management, and techniques for reorganizing indexes like rebuilding and coalescing. Bitmap indexes are also discussed, noting they use a compressed bitmap in their internal keys. Finally, some common myths about indexing are debunked, such as the idea that indexes need regular rebuilds to remain balanced.
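The B-tree structure described above can be inspected directly; a minimal sketch, assuming an Oracle database and an illustrative table/index name:

```sql
-- Hypothetical index on a sample table; all names are illustrative only.
CREATE INDEX emp_name_i ON emp (last_name);

-- VALIDATE STRUCTURE populates the session-private INDEX_STATS view,
-- which exposes the B-tree height, branch/leaf block counts and space usage.
ANALYZE INDEX emp_name_i VALIDATE STRUCTURE;

SELECT height, blocks, br_blks, lf_blks, pct_used
FROM   index_stats;
```

A HEIGHT that stays small while LF_BLKS grows is exactly the "indexes stay balanced" behavior the talk uses to debunk the regular-rebuild myth.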
The document discusses adaptive query optimization in Oracle 12c. Key points include:
- In 12c, adaptive plans allow the execution plan to change at runtime based on statistics collected, such as switching from a hash join to a nested loops join.
- During the first execution, a statistics collector is inserted into the plan, and the plan can switch at runtime based on the row counts actually observed. SQL plan directives may then be created.
- For subsequent executions, the information from the initial execution is used to automatically re-optimize the plan, improving performance over time.
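The adaptive behavior summarized above can be observed from the data dictionary; a minimal sketch, assuming a 12c database (the sql_text filter is a placeholder):

```sql
-- The +ADAPTIVE format of DBMS_XPLAN shows both the default plan and the
-- inactive branches of an adaptive plan for the last statement executed.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(sql_id => NULL, format => '+ADAPTIVE'));

-- Whether a cursor resolved an adaptive plan, or is marked for
-- re-optimization on the next execution, is visible in V$SQL:
SELECT sql_id, child_number, is_resolved_adaptive_plan, is_reoptimizable
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* demo */%';
```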
This document provides information about new features and improvements in MySQL 8.0. It discusses enhancements to JSON functionality including new functions and indexing support. It also summarizes added functionality for GIS, UUIDs, common table expressions, window functions, and other query optimizations. The document notes that MySQL 8.0 uses utf8mb4 as the default character set for improved Unicode support and performance.
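The MySQL 8.0 features mentioned above can be combined in a few lines; a hedged sketch with illustrative table and column names:

```sql
-- MySQL 8.0: common table expression plus a window function.
WITH monthly AS (
  SELECT DATE_FORMAT(order_date, '%Y-%m') AS ym, SUM(amount) AS total
  FROM   orders
  GROUP  BY ym
)
SELECT ym,
       total,
       SUM(total) OVER (ORDER BY ym) AS running_total
FROM   monthly;

-- MySQL 8.0: indexing a JSON attribute via a generated column.
ALTER TABLE products
  ADD COLUMN brand VARCHAR(64)
      GENERATED ALWAYS AS (attributes->>'$.brand') STORED,
  ADD INDEX idx_brand (brand);
```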
The document discusses the top 12 new features of Oracle 12c, including improved column defaults that allow identity columns, increased size limits for VARCHAR2 columns up to 32K, improved queries for top-N results using the row-limiting clause (FETCH FIRST), and adaptive execution plans that allow the optimizer to choose alternative execution plans based on statistics gathered during the first execution. Temporary undo segments are also introduced to avoid generating redo for temporary table operations.
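Two of the features listed above fit in a short sketch, assuming a 12c database (table and column names are illustrative):

```sql
-- 12c identity column: the value is generated without a manual sequence.
CREATE TABLE orders (
  id     NUMBER GENERATED ALWAYS AS IDENTITY,
  amount NUMBER
);

-- 12c row-limiting clause for top-N queries:
SELECT *
FROM   orders
ORDER  BY amount DESC
FETCH  FIRST 10 ROWS ONLY;
```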
This document discusses techniques for optimizing SQL performance in Oracle databases. It covers topics like optimizing the optimizer itself through configuration changes and statistics collection, detecting poorly performing SQL, and methods for improving plans such as indexing, partitioning, hints and baselines. The goal is to maximize the optimizer's accuracy and ability to handle edge cases, while also knowing how to intervene when needed to capture fugitive SQL and ensure acceptable performance.
These are the slides used by Dilip Kumar of EnterpriseDB for his presentation at pgDay Asia 2016, Singapore. He talked about scalability and performance improvements in PostgreSQL v9.6, which is expected to be released in Dec/2016 - Jan/2017.
This document discusses Oracle query optimizer concepts like selectivity, cardinality, and object statistics. It provides examples of how the optimizer estimates cardinality based on statistics values like number of rows, distinct values, density and nulls. It also shows how index statistics like clustering factor, leaf blocks impact the choice between an index scan or full table scan.
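The estimation inputs mentioned above live in the data dictionary; a hedged sketch, assuming no histogram on the column (so selectivity is roughly 1/num_distinct, adjusted for NULLs; table and column names are illustrative):

```sql
-- cardinality ~= (num_rows - num_nulls) * (1 / num_distinct)
SELECT t.num_rows,
       c.num_distinct,
       c.num_nulls,
       c.density,
       ROUND((t.num_rows - c.num_nulls) / c.num_distinct) AS est_cardinality
FROM   user_tables             t
JOIN   user_tab_col_statistics c ON c.table_name = t.table_name
WHERE  t.table_name  = 'EMP'
AND    c.column_name = 'DEPTNO';
```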
Presentation: VMware ROI/TCO Calculator - solarisyourep
This document provides an overview of VMware's ROI/TCO calculator for analyzing the costs and benefits of virtualizing server infrastructure with VMware vSphere. The calculator allows users to model various scenarios including expected future savings, past realized savings, or a mix. It covers areas like server hardware, storage, networking, power and cooling, administration labor, and downtime. Users work through a series of modules, entering configuration details and selecting VMware products. The calculator then produces estimates of return on investment, total cost of ownership, and payback period.
Watch the full webinar at: http://embt.co/1pb4Zb4
This presentation is a must-see for anyone interested in Oracle 12! Dan is an Oracle ACE Director and has assembled this presentation with fresh and inside information from Oracle Corp and OOW13. Dan has pulled his top Oracle 12 features from the plethora of new features available and documented in his user group presentations "Oracle 12c New Features for Developers" and "Oracle 12c New Features for DBA's".
Top 10 features will include:
New SQL Syntax
New SQL and PL/SQL Limits
Pluggable Database
New Packages
Deprecated Features
New SQL Tuning Features
This presentation covers new SQL & PL/SQL syntax and options, the container DB of course, new SQL optimizer features, deprecated features, hints, and more. If you're supporting applications, then you won't want to miss this webinar!
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ... - Accumulo Summit
Talk Abstract
As with all open-source databases, Accumulo developers must balance building exciting new features against hacking on performance and stability. As the core features solidify and expand, we see many opportunities to improve performance. An effective methodology for performance improvement is scientific in nature and follows a well-defined modeling and simulation approach, matching theory to experimentation in an iterative fashion.
Ingest performance is one of the most differentiating characteristics of Accumulo. However, there is much room for improvement for typical ingest-heavy applications. Accumulo supports two mechanisms to bring data in: streaming ingest and bulk ingest. In bulk ingest, the goal is to maximize throughput without constraining latency. Bulk ingest involves creating a set of files that conform to Accumulo's internal RFile format and then registering those files with Accumulo. MapReduce provides a framework for generating, sorting, and storing key/value pairs, which form the primary elements of preparing RFiles for bulk ingest. MapReduce has been used many times over the years to break sorting records, as with Terasort, so it is a reasonable choice for maximizing bulk ingest throughput. However, the theory often proves challenging to implement, as there are many performance pitfalls along the way.
In this talk, we dive deep into optimizing MapReduce for Accumulo bulk ingest. We share detailed theoretical and empirical performance models, we discuss techniques for profiling performance, and we suggest reusable techniques for squeezing the maximum performance out of enterprise-grade Accumulo bulk ingest.
Speaker
Chris McCubbin
Director of Data Science, Sqrrl
Chris is the Director of Data Science for Sqrrl. He has extensive experience with the Hadoop ecosystem and applying scientific computation algorithms to real-world datasets. Previously, Chris developed Big Data analysis tools for the Intelligence Community and applied artificial intelligence techniques to unmanned vehicle systems. He holds a MS in Computer Science and BS in Computer Science and Mathematics from the University of Maryland.
This document discusses indexing in Oracle Exadata. It begins by providing background on the speaker and their experience. It then discusses how Exadata storage server software, including hybrid columnar compression and smart flash cache, can accelerate queries. The document shows an example of how a query that previously took minutes can take seconds on Exadata due to smart scans. It discusses how indexes may no longer provide benefits and can even reduce performance on Exadata. The document considers whether indexes should be dropped or if the decision is more complex. It analyzes the costs of using indexes versus full table scans on Exadata. Finally, it provides examples to illustrate smart scans.
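Whether a query actually benefited from a smart scan can be checked from session statistics; a hedged sketch, assuming an Exadata-enabled database (statistic names as exposed in V$STATNAME on such platforms):

```sql
-- Non-zero values after running the query indicate offloading took place.
SELECT n.name, s.value
FROM   v$mystat   s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name IN ('cell physical IO interconnect bytes returned by smart scan',
                  'cell physical IO bytes saved by storage index');
```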
DB Time, Average Active Sessions, and ASH Math - Oracle performance fundamentals - John Beresniewicz
RMOUG 2020 abstract:
This session will cover core concepts for Oracle performance analysis first introduced in Oracle 10g and forming the backbone of many features in the Diagnostic and Tuning packs. The presentation will cover the theoretical basis and meaning of these concepts, as well as illustrate how they are fundamental to many user-facing features in both the database itself and Enterprise Manager.
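The core identity behind these concepts is Average Active Sessions = DB Time / elapsed time, and ASH approximates it by counting one sample per active session per second. A minimal sketch of that "ASH math", assuming access to V$ACTIVE_SESSION_HISTORY:

```sql
-- ASH samples active sessions once per second, so the sample count
-- divided by the number of seconds in the window approximates AAS.
SELECT ROUND(COUNT(*) / (10 * 60), 1) AS approx_aas_last_10_min
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '10' MINUTE;
```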
- Oracle Database 12c introduced several new features for DBAs including adaptive execution plans, PGA_AGGREGATE_LIMIT parameter, enhanced statistics gathering options, renaming datafiles online, FETCH FIRST clause for limiting rows, table restoration using RMAN, SQL statements in RMAN, preupgrade and parallel upgrade utilities, and real-time ADDM analysis.
- Adaptive execution plans allow queries to switch plans during execution if row counts differ significantly from estimates. PGA_AGGREGATE_LIMIT provides a hard limit on PGA memory usage to prevent sessions from consuming too much. Enhanced statistics options include system statistics for Exadata, concurrent collection, and new histogram types.
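The PGA cap mentioned above is a single initialization parameter; a sketch, assuming a 12c instance and an arbitrary 8 GB limit:

```sql
-- Hard limit on aggregate PGA usage; sessions pushing the instance past
-- the limit can have their calls aborted or be terminated.
ALTER SYSTEM SET pga_aggregate_limit = 8G;
```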
Whitepaper: Exadata Consolidation Success Story - Kristofferson A
1. The document discusses database and server consolidation using Oracle Exadata and describes the challenges of managing highly consolidated environments to ensure quality of service.
2. It outlines a 4-step process for accurate provisioning and capacity planning using a tool called the Provisioning Worksheet: collecting database details, defining the target Exadata hardware capacity, creating a provisioning plan, and reviewing resource utilization.
3. The process relies on basic capacity planning to ensure workload requirements fit available capacity. Database CPU and storage requirements are gathered, a target Exadata configuration is set, databases are mapped to nodes in the plan, and final utilization is summarized to identify any capacity shortfalls.
Indexing Strategies for Oracle Databases - Beyond the Create Index Statement - Sean Scott
B-tree indexes, the most common index type, order data within branch and leaf blocks. Composite indexes consist of more than one column to improve performance. When choosing indexes, consider columns frequently used in queries, primary keys, and foreign keys. Index maintenance includes rebuilding, coalescing, and shrinking indexes.
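The maintenance options named above map to standard DDL; a sketch with illustrative index and table names:

```sql
ALTER INDEX ord_cust_i REBUILD ONLINE;  -- recreates the segment from scratch
ALTER INDEX ord_cust_i COALESCE;        -- merges underfilled adjacent leaf blocks
ALTER INDEX ord_cust_i SHRINK SPACE;    -- compacts and returns space to the tablespace

-- A composite index covering two predicates that are frequently combined:
CREATE INDEX ord_cust_date_i ON orders (customer_id, order_date);
```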
Create Your Oracle Apps R12 Lab with Less than US$1000 - Ajith Narayanan
This document summarizes a presentation on how to create an Oracle Apps R12 lab with less than $1000. It discusses designing a multi-tier architecture for Oracle Apps R12 on a Linux platform using inexpensive hardware. Specifically, it describes how to set up 5 Dell desktops running Oracle Linux and connected via switches to act as nodes, with a NAS storage device providing shared storage between the nodes. Software components like Oracle Grid Infrastructure, Oracle Database, and Oracle E-Business Suite can then be installed to implement the multi-tier RAC configuration. The presentation provides step-by-step instructions for tasks like preparing the shared storage, installing the various Oracle software components, and configuring the applications tier to use the RAC database.
Crack the Complexity of Oracle Applications R12 Workload v2 - Ajith Narayanan
This document summarizes Ajith Narayanan's presentation on characterizing Oracle Applications R12 workload. The presentation covers instrumentation, collection, classification, measurement, and interpretation of workload data. The goal is to understand workload trends and impacts in order to optimize system resources and performance. Key aspects discussed include identifying workload classes like forms, batches, and self-service applications; measuring resource usage correlated to workload; and using interpretations to make scheduling and tuning decisions.
The document discusses table partitioning and sharding in PostgreSQL as approaches to improve performance and scalability as data volumes grow over time. Table partitioning involves splitting a master table into multiple child tables or partitions based on a partition function to distribute data. Sharding distributes partitions across multiple database servers. The document provides steps to implement table partitioning and sharding in PostgreSQL using the Citus extension to distribute a sample sales table across a master and worker node.
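The sharding step described above is a single function call once the Citus extension is installed; a hedged sketch, assuming a coordinator node with workers already registered and an illustrative sales table:

```sql
CREATE EXTENSION citus;

CREATE TABLE sales (
  sale_id   bigint,
  store_id  int,
  sale_date date,
  amount    numeric
);

-- Distributes (shards) the table across the worker nodes by store_id,
-- so rows for the same store land on the same shard.
SELECT create_distributed_table('sales', 'store_id');
```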
This document discusses database transaction logging and concurrency control in DB2. It covers topics such as locks, isolation levels, deadlocks, snapshots, and transaction logging. It provides information on DB2's use of row-level and table-level locks, lock modes, lock escalation, lock monitoring using snapshots, and the two logging methods of circular logging and archival logging.
Why Is My Oracle E-Biz Database Slow? A Million Dollar Question - Ajith Narayanan
The document discusses analyzing the system capacity of the database and middle tiers for an Oracle E-Business Suite environment. It covers various statistical methods for analyzing the database tier capacity, including simple math models using CPU and memory metrics, linear regression analysis of logical reads versus CPU utilization, and queuing theory models. It also provides recommendations for analyzing the middle tier, such as checking the application server access logs for errors, tuning JDBC settings, sizing the concurrent managers correctly, and analyzing long-running concurrent programs. The document aims to help understand if the system is properly sized to serve the workload by applying these different analytical techniques.
This document provides 9 hints for optimizing Oracle database performance:
1. Take a methodical and empirical approach to tuning by focusing on root causes, measuring performance before and after changes, and avoiding "silver bullets".
2. Design databases and applications with performance in mind from the beginning.
3. Index wisely by only creating useful indexes that improve performance without excessive overhead.
4. Leverage built-in Oracle tools like DBMS_XPLAN and SQL Trace to measure performance.
5. Tune the optimizer by adjusting parameters and statistics to encourage better execution plans.
6. Focus SQL and PL/SQL tuning on problem queries, joins, sorts, and DML statements.
7. Address
The document discusses best practices for gathering statistics in Oracle databases. It covers how to gather statistics using the DBMS_STATS package, additional types of statistics like column groups and expression statistics, when to gather statistics such as after data loads, and how to improve statistics gathering performance using parallel execution and incremental gathering for partitioned tables.
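The DBMS_STATS techniques listed above look like this in practice; a sketch with illustrative schema, table, and column names:

```sql
-- Gather table statistics with the recommended AUTO settings:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SALES',
    tabname    => 'ORDERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',
    degree     => DBMS_STATS.AUTO_DEGREE);
END;
/

-- Column-group (extended) statistics for correlated columns:
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SALES', 'ORDERS', '(COUNTRY, REGION)')
FROM   dual;

-- Enable incremental statistics for a partitioned table:
BEGIN
  DBMS_STATS.SET_TABLE_PREFS('SALES', 'ORDERS', 'INCREMENTAL', 'TRUE');
END;
/
```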
Understanding Query Optimization with 'regular' and 'Exadata' Oracle - Guatemala User Group
The document discusses query optimization with regular Oracle databases and Exadata databases. It explains what happens when a SQL statement is issued, including parsing, optimization, and execution. It describes what an execution plan is and how it can be generated and displayed. It discusses how operations can be offloaded to storage cells on Exadata and factors the optimizer considers for determining a good execution plan.
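Generating and displaying an execution plan, as described above, takes two statements; a sketch with an illustrative query:

```sql
-- Populates PLAN_TABLE without executing the statement:
EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;

-- Formats and displays the plan just generated:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```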
This document discusses mining the Automatic Workload Repository (AWR) for capacity planning and visualization purposes. It summarizes Karl Arao's presentation on using AWR data to understand system performance over time, identify bottlenecks, and perform capacity planning. Tools mentioned include AWR scripts, visualization techniques like double Y-axis graphs, and statistical methods like linear regression to model relationships between metrics like average active sessions and CPU utilization.
Free Load Testing Tools for Oracle Database – Which One Do I Use? - Christian Antognini
It regularly happens to me that for testing purposes I have to generate load on an Oracle Database. The three most common situations leading to such a task are when I need to: assess the performance of a new platform or storage subsystem; verify whether a set of SQL statements executed on a specific environment and/or configuration fulfils the expected performance requirements; perform usability and functionality checks of tools or utilities that require a non-trivial load to be carried out. The aim of this presentation is to introduce the freely available tools that I use, to explain how I use them, and to present real-world use cases of their utilization.
The Features That (maybe) You Didn't Know About - Oren Nakdimon
The Oracle database includes tons of features for developers, and because of that we sometimes miss some of them - good and useful features that many developers either don't know about them or assume they are not supported in their licensed edition or options. This session focuses on such features - introducing them, exploring them, showing when they are useful and how to use them.
These are the slides used by Dilip Kumar of EnterpriseDB for his presentation at pgDay Asia 2016, Singpaore. He talked about scalability and performance improvements in PostgreSQL v9.6, which is expected to be released in Dec/2016 - Jan/2017.
This document discusses Oracle query optimizer concepts like selectivity, cardinality, and object statistics. It provides examples of how the optimizer estimates cardinality based on statistics values like number of rows, distinct values, density and nulls. It also shows how index statistics like clustering factor, leaf blocks impact the choice between an index scan or full table scan.
Presentation v mware roi tco calculatorsolarisyourep
This document provides an overview of VMware's ROI/TCO calculator for analyzing the costs and benefits of virtualizing server infrastructure with VMware vSphere. The calculator allows users to model various scenarios including expected future savings, past realized savings, or a mix. It covers areas like server hardware, storage, networking, power and cooling, administration labor, and downtime. Users work through a series of modules, entering configuration details and selecting VMware products. The calculator then produces estimates of return on investment, total cost of ownership, and payback period.
Watch the full webinar at: http://embt.co/1pb4Zb4
This presentation is a must-see for anyone interested in Oracle 12! Dan is an Oracle ACE Director and has assembled this presentation with fresh and inside information from Oracle Corp and OOW13. Dan has pulled his top Oracle 12 features from the plethora of new features available and documented in his user group presentations "Oracle 12c New Features for Developers" and "Oracle 12c New Features for DBA's".
Top 10 features will include:
New SQL Syntax
New SQL and PL/SQL Limits
Pluggable Database
New Packages
Deprecated Features
New SQL Tuning Features
This presentation covers new SQL & PL/SQL syntax and options, the container DB of course, new SQL optimizer features, deprecated features, hints, and more. If you're supporting applications, then you won't want to miss this webinar!
Accumulo Summit 2015: Performance Models for Apache Accumulo: The Heavy Tail ...Accumulo Summit
Talk Abstract
As with all open-source databases, Accumulo developers often compete between building exciting new features and hacking on performance and stability. As the core features solidify and expand, we see many opportunities to improve performance. An effective methodology for performance improvement is scientific in nature, and follows a well-definite modeling and simulation approach, matching theory to experimentation in an iterative fashion.
Ingest performance is one of the most differentiating characteristics of Accumulo. However, there is much room for improvement for typical ingest-heavy applications. Accumulo supports two mechanisms to bring data in: streaming ingest and bulk ingest. In bulk ingest, the goal is to maximize throughput without constraining latency. Bulk ingest involves creating a set of files that conform to Accumulo's internal RFile format and then registering those files with Accumulo. MapReduce provides a framework for generating, sorting, and storing key/value pairs, which form the primary elements of preparing RFiles for bulk ingest. MapReduce has been used many times over the years to break sorting records, such as Terasort. We can expect it is a reasonable choice for maximizing bulk ingest throughput. However, the theory often proves challenging to implement as there are many performance pitfalls along the way.
In this talk, we dive deep into optimizing MapReduce for Accumulo bulk ingest. We share detailed theoretical and empirical performance models, we discuss techniques for profiling performance, and we suggest reusable techniques for squeezing the maximum performance out of enterprise-grade Accumulo bulk ingest.
Speaker
Chris McCubbin
Director of Data Science, Sqrrl
Chris is the Director of Data Science for Sqrrl. He has extensive experience with the Hadoop ecosystem and applying scientific computation algorithms to real-world datasets. Previously, Chris developed Big Data analysis tools for the Intelligence Community and applied artificial intelligence techniques to unmanned vehicle systems. He holds a MS in Computer Science and BS in Computer Science and Mathematics from the University of Maryland.
This document discusses indexing in Oracle Exadata. It begins by providing background on the speaker and their experience. It then discusses how Exadata storage server software, including hybrid columnar compression and smart flash cache, can accelerate queries. The document shows an example of how a query that previously took minutes can take seconds on Exadata due to smart scans. It discusses how indexes may no longer provide benefits and can even reduce performance on Exadata. The document considers whether indexes should be dropped or if the decision is more complex. It analyzes the costs of using indexes versus full table scans on Exadata. Finally, it provides examples to illustrate smart scans.
DB Time, Average Active Sessions, and ASH Math - Oracle performance fundamentalsJohn Beresniewicz
RMOUG 2020 abstract:
This session will cover core concepts for Oracle performance analysis first introduced in Oracle 10g and forming the backbone of many features in the Diagnostic and Tuning packs. The presentation will cover the theoretical basis and meaning of these concepts, as well as illustrate how they are fundamental to many user-facing features in both the database itself and Enterprise Manager.
- Oracle Database 12c introduced several new features for DBAs including adaptive execution plans, PGA_AGGREGATE_LIMIT parameter, enhanced statistics gathering options, renaming datafiles online, FETCH FIRST clause for limiting rows, table restoration using RMAN, SQL statements in RMAN, preupgrade and parallel upgrade utilities, and real-time ADDM analysis.
- Adaptive execution plans allow queries to switch plans during execution if row counts differ significantly from estimates. PGA_AGGREGATE_LIMIT provides a hard limit on PGA memory usage to prevent sessions from consuming too much. Enhanced statistics options include system statistics for Exadata, concurrent collection, and new histogram types.
Whitepaper: Exadata Consolidation Success StoryKristofferson A
1. The document discusses database and server consolidation using Oracle Exadata and describes the challenges of managing highly consolidated environments to ensure quality of service.
2. It outlines a 4-step process for accurate provisioning and capacity planning using a tool called the Provisioning Worksheet: collecting database details, defining the target Exadata hardware capacity, creating a provisioning plan, and reviewing resource utilization.
3. The process relies on basic capacity planning to ensure workload requirements fit available capacity. Database CPU and storage requirements are gathered, a target Exadata configuration is set, databases are mapped to nodes in the plan, and final utilization is summarized to identify any capacity shortfalls.
Indexing Strategies for Oracle Databases - Beyond the Create Index StatementSean Scott
B-tree indexes are the most common type of index and order data within the index in branches and leaves. Composite indexes consist of more than one column to improve performance. When choosing indexes, consider columns frequently used in queries, primary keys, and foreign keys. Index maintenance includes rebuilding, coalescing, and shrinking indexes.
Create your oracle_apps_r12_lab_with_less_than_us1000Ajith Narayanan
This document summarizes a presentation on how to create an Oracle Apps R12 lab with less than $1000. It discusses designing a multi-tier architecture for Oracle Apps R12 on a Linux platform using inexpensive hardware. Specifically, it describes how to set up 5 Dell desktops running Oracle Linux and connected via switches to act as nodes, with a NAS storage device providing shared storage between the nodes. Software components like Oracle Grid Infrastructure, Oracle Database, and Oracle E-Business Suite can then be installed to implement the multi-tier RAC configuration. The presentation provides step-by-step instructions for tasks like preparing the shared storage, installing the various Oracle software components, and configuring the applications tier to use the RAC database.
Crack the complexity of oracle applications r12 workload v2Ajith Narayanan
This document summarizes Ajith Narayanan's presentation on characterizing Oracle Applications R12 workload. The presentation covers instrumentation, collection, classification, measurement, and interpretation of workload data. The goal is to understand workload trends and impacts in order to optimize system resources and performance. Key aspects discussed include identifying workload classes like forms, batches, and self-service applications; measuring resource usage correlated to workload; and using interpretations to make scheduling and tuning decisions.
The document discusses table partitioning and sharding in PostgreSQL as approaches to improve performance and scalability as data volumes grow over time. Table partitioning involves splitting a master table into multiple child tables or partitions based on a partition function to distribute data. Sharding distributes partitions across multiple database servers. The document provides steps to implement table partitioning and sharding in PostgreSQL using the Citus extension to distribute a sample sales table across a master and worker node.
This document discusses database transaction logging and concurrency control in DB2. It covers topics such as locks, isolation levels, deadlocks, snapshots, and transaction logging. It provides information on DB2's use of row-level and table-level locks, lock modes, lock escalation, lock monitoring using snapshots, and the two logging methods of circular logging and archival logging.
Why is my_oracle_e-biz_database_slow_a_million_dollar_questionAjith Narayanan
The document discusses analyzing the system capacity of the database and middle tiers for an Oracle E-Business Suite environment. It covers various statistical methods for analyzing the database tier capacity, including simple math models using CPU and memory metrics, linear regression analysis of logical reads versus CPU utilization, and queuing theory models. It also provides recommendations for analyzing the middle tier, such as checking the application server access logs for errors, tuning JDBC settings, sizing the concurrent managers correctly, and analyzing long-running concurrent programs. The document aims to help understand if the system is properly sized to serve the workload by applying these different analytical techniques.
This document provides 9 hints for optimizing Oracle database performance:
1. Take a methodical and empirical approach to tuning by focusing on root causes, measuring performance before and after changes, and avoiding "silver bullets".
2. Design databases and applications with performance in mind from the beginning.
3. Index wisely by only creating useful indexes that improve performance without excessive overhead.
4. Leverage built-in Oracle tools like DBMS_XPLAN and SQL Trace to measure performance.
5. Tune the optimizer by adjusting parameters and statistics to encourage better execution plans.
6. Focus SQL and PL/SQL tuning on problem queries, joins, sorts, and DML statements.
7. Address
The document discusses best practices for gathering statistics in Oracle databases. It covers how to gather statistics using the DBMS_STATS package, additional types of statistics like column groups and expression statistics, when to gather statistics such as after data loads, and how to improve statistics gathering performance using parallel execution and incremental gathering for partitioned tables.
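The incremental-gathering idea mentioned above, deriving global statistics from per-partition summaries instead of rescanning the whole table, can be illustrated with a toy synopsis in Python (real synopses are compact, approximate structures, not plain sets):

```python
def partition_synopsis(values):
    """Per-partition synopsis: here simply the set of distinct values
    (real synopses are compact, approximate structures)."""
    return set(values)

def merge_global_ndv(synopses):
    """Global number of distinct values (NDV) derived by merging
    per-partition synopses, with no full-table rescan. Plain addition
    would double-count values shared by several partitions."""
    merged = set()
    for s in synopses:
        merged |= s
    return len(merged)

jan = partition_synopsis(["a", "b", "c"])
feb = partition_synopsis(["c", "d"])
assert merge_global_ndv([jan, feb]) == 4   # not 3 + 2 = 5
```

Only partitions whose data changed need their synopsis refreshed, which is what makes incremental gathering cheap for large partitioned tables.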
Understanding Query Optimization with ‘regular’ and ‘Exadata’ Oracle – Guatemala User Group
The document discusses query optimization with regular Oracle databases and Exadata databases. It explains what happens when a SQL statement is issued, including parsing, optimization, and execution. It describes what an execution plan is and how it can be generated and displayed. It discusses how operations can be offloaded to storage cells on Exadata and factors the optimizer considers for determining a good execution plan.
This document discusses mining the Automatic Workload Repository (AWR) for capacity planning and visualization purposes. It summarizes Karl Arao's presentation on using AWR data to understand system performance over time, identify bottlenecks, and perform capacity planning. Tools mentioned include AWR scripts, visualization techniques like double Y-axis graphs, and statistical methods like linear regression to model relationships between metrics like average active sessions and CPU utilization.
Free Load Testing Tools for Oracle Database – Which One Do I Use? – Christian Antognini
It regularly happens to me that for testing purposes I have to generate load on an Oracle Database. The three most common situations leading to such a task are when I need to: assess the performance of a new platform or storage subsystem; verify whether a set of SQL statements executed on a specific environment and/or configuration fulfils the expected performance requirements; perform usability and functionality checks of tools or utilities that require a non-trivial load to be carried out. The aim of this presentation is to introduce the freely available tools that I use, to explain how I use them, and to present real-world use cases of their utilization.
The Features That (maybe) You Didn't Know About – Oren Nakdimon
The Oracle database includes tons of features for developers, and because of that we sometimes miss some of them: good and useful features that many developers either don't know about or assume are not supported in their licensed edition or options. This session focuses on such features, introducing them, exploring them, and showing when they are useful and how to use them.
Oracle ACFS High Availability NFS Services (HANFS) – Anju Garg
Oracle ACFS High Availability NFS Services (HANFS) allows Oracle ACFS clusters to configure highly available NFS servers. HANFS exposes NFS exports through Highly Available VIPs (HAVIPs) so that if a node hosting an export fails, the HAVIP and corresponding export will fail over to another node, providing uninterrupted NFS service. The document discusses configuring HANFS resources including ACFS file systems, HAVIPs, and ExportFS resources and verifying access to an exported file system from an NFS client.
Policy based cluster management in Oracle 12c – Anju Garg
Oracle Grid Infrastructure 12c enhances the use of server pools by introducing server attributes (e.g. memory, CPU_count) that can be associated with each server. Server pools can be configured so that their members belong to a category of servers that share a particular set of attributes. Moreover, administrators can maintain a library of policies and switch between them as required, rather than manually reallocating servers to server pools based on workload. This paper discusses the new features of policy-based cluster management in 12c in detail.
A presentation about new features and enhancements related to indexes and indexing in Oracle 12c.
See also the related post: http://db-oriented.com/2015/07/03/indexes-and-indexing-in-oracle-12c
A session in the DevNet Zone at Cisco Live, Berlin. At the moment, this is the DoE: DevOps of Everything. DevOps is about culture first, but many people take shortcuts to tools and workflows, forgetting the essence of DevOps, which is about people, and not only from Dev to Ops. In this session, we will show you how we are currently building a DevOps culture with a focus on continuous improvement.
In version 12c Oracle introduced new features to allow adaptive optimizations: Adaptive Plans and Adaptive Statistics. After a quick presentation of the concepts, this session will explore the interaction of these features with other performance management techniques, such as SPM and SQL profiles, using examples. Attendees will get an updated picture of the tools available to troubleshoot performance issues and how to get the most out of these new features.
Part 1 of SQL Tuning Workshop - Understanding the Optimizer – Maria Colgan
Part 1 of a 5-part SQL Tuning workshop. This presentation covers the history of the Oracle Optimizer and explains the first thing the Optimizer does when it receives a SQL statement, which is to transform the statement in order to open up additional access paths.
In this first of a series of presentations, we'll overview the differences between SQL and PL/SQL, and the first steps in optimization, such as understanding RULE vs. COST and how to cut response time by 90% in data extractions running in SQL*Plus.
The document discusses adaptive query optimization in Oracle 12c. It begins by describing drawbacks of the optimizer in pre-12c versions, such as insufficient statistics triggering dynamic sampling. It then outlines the key features of adaptive query optimization in 12c, including adaptive/dynamic plans using techniques like adaptive parallel distribution and adaptive joins. It also discusses automatic re-optimization using feedback from initial executions. The document provides illustrations of these techniques using example queries and optimizer statistics.
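The adaptive join technique summarized above can be mimicked in a few lines of Python; the buffering "statistics collector" and the inflection point are simplifications of what the 12c runtime actually does:

```python
def adaptive_join(build_rows, probe_rows, inflection_point):
    """Buffer build-side rows behind a 'statistics collector'; if the
    actual row count stays at or below the inflection point, run a
    nested loops join, otherwise switch to a hash join. Both branches
    return the same matches for an equality join on unique keys."""
    buffered = list(build_rows)           # rows seen by the collector
    if len(buffered) <= inflection_point:
        method = "NESTED LOOPS"
        result = [(b, p) for b in buffered for p in probe_rows if b == p]
    else:
        method = "HASH JOIN"
        ht = set(buffered)                # build the hash table once
        result = [(p, p) for p in probe_rows if p in ht]
    return method, result
```

The point is that the decision is made from the rows actually seen at runtime, not from the optimizer's compile-time cardinality estimate.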
Balancing the line by using heuristic method based on CPM in SALBP – a case study – eSAT Journals
Abstract
In mass production systems, line balancing plays a great role, yet it is not easy even for a simple straight line. Heuristic methods are therefore highly desirable for solving these problems; they also play a great role in the formation of metaheuristic methods, which makes efficient heuristics all the more important. This paper presents a heuristic method based on the critical path method for simple assembly line balancing. The research is mainly concerned with minimizing the number of workstations, improving the smoothness index and the mean absolute deviation (MAD), and increasing line efficiency.
Keywords: Heuristic methods, Assembly line balancing problem, Critical path method, Simple assembly line balancing.
Paul Guerin is an OCP Meetup presenter who has worked as a DBA at Origin Energy for 3.5 years. He discusses different types of access paths that a database query optimizer can use to retrieve data from a database, including full table scans, index scans using rowids, unique index scans, range index scans, skip index scans, and full index scans. He provides examples of how the optimizer chooses between these access paths based on factors like indexes available and estimated execution costs.
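The cost trade-off between a full table scan and an index range scan can be sketched with a simplified cost model in Python; the formulas loosely follow the classic cost-based optimizer intuition, but the numbers and formulas are illustrative, not the optimizer's exact arithmetic:

```python
def full_scan_cost(table_blocks, multiblock_read=8):
    """Full table scan: every block is read, amortized by
    multiblock I/O."""
    return table_blocks / multiblock_read

def index_range_scan_cost(index_height, leaf_blocks, selectivity,
                          clustering_factor):
    """Index range scan: descend the branch levels, read the selected
    fraction of leaf blocks, then visit the table via rowids (the
    clustering factor approximates those table-block visits)."""
    return (index_height
            + leaf_blocks * selectivity
            + clustering_factor * selectivity)

# Fetching 1% of the rows favours the index; fetching 50% favours
# the full scan, which is why the optimizer weighs selectivity.
assert index_range_scan_cost(3, 500, 0.01, 20000) < full_scan_cost(10000)
assert index_range_scan_cost(3, 500, 0.5, 20000) > full_scan_cost(10000)
```

This is why the same query can switch access paths as predicates, statistics, or the clustering factor change.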
Oracle Parallel Execution, an Enterprise Edition feature, allows a SQL execution to be distributed automatically across multiple processes/CPUs. This calls for different approaches when analyzing and troubleshooting such parallel executions, especially since parallel execution can run into problems that simply do not exist with normal, serial execution. This talk covers these specific problems and shows how to analyze them.
Oracle Database 12c introduced several new features for parallel execution plans, including:
1. Performance feedback which allows the optimizer to reoptimize a plan if the initial auto DOP is suboptimal.
2. Hybrid hash distribution, a new distribution method that helps avoid data skewing problems in hash joins.
3. Support for serial operations like non-parallelized functions in parallel plans through techniques like the PX SELECTOR row source.
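The skew handling behind hybrid hash distribution (point 2 above) can be sketched as follows; the threshold, the counting step, and the names are invented for illustration of the idea, not Oracle's implementation:

```python
from collections import Counter
from itertools import cycle

def hybrid_hash_distribute(rows, key, n_slaves, skew_threshold):
    """Skew-aware distribution sketch: keys whose row count exceeds
    the threshold are spread round-robin across all slaves (a real
    plan then broadcasts the matching build rows so the join still
    finds every match), while all other keys are distributed by a
    plain hash, as in classic hash distribution."""
    counts = Counter(r[key] for r in rows)
    skewed = {k for k, c in counts.items() if c > skew_threshold}
    slaves = [[] for _ in range(n_slaves)]
    rr = cycle(range(n_slaves))
    for r in rows:
        if r[key] in skewed:
            slaves[next(rr)].append(r)          # spread the hot key
        else:
            slaves[hash(r[key]) % n_slaves].append(r)
    return slaves, skewed
```

With pure hash distribution, a single hot join key would land entirely on one slave and serialize the join; spreading it keeps all slaves busy.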
The document provides an agenda and overview for an executive ERP training program on Oracle's Value Chain Planning (VCP) suite of applications. It includes an agenda with sessions on various VCP modules like Demand Management, Advanced Supply Chain Planning, Real-Time Sales and Operations Planning, and an example using VCP for a distributed power company. Key modules in VCP like Demantra, Advanced Planning Command Center, and Advanced Supply Chain Planning are described in terms of their capabilities for integrated demand and supply planning.
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A... – IRJET Journal
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
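The small-tasks-to-slow-resources pairing described above can be sketched in Python; the median split and the least-loaded tie-break are assumptions made for illustration, not the paper's exact algorithm:

```python
def mq_schedule(task_lengths, resource_speeds):
    """Multi Queue (MQ) sketch: split tasks into short/long queues
    around the median length, and resources into slow/fast groups
    around the median speed; short tasks run on slow resources and
    long tasks on fast ones, each picking the least-loaded resource.
    Returns the makespan (time when the last resource finishes)."""
    tasks = sorted(task_lengths)
    res = sorted(range(len(resource_speeds)),
                 key=lambda i: resource_speeds[i])
    half_t, half_r = len(tasks) // 2, len(res) // 2
    queues = [(tasks[:half_t], res[:half_r]),   # short tasks -> slow
              (tasks[half_t:], res[half_r:])]   # long tasks  -> fast
    finish = [0.0] * len(resource_speeds)
    for q_tasks, q_res in queues:
        if not q_res:            # too few resources to split: share all
            q_res = res
        for t in q_tasks:
            target = min(q_res, key=lambda i: finish[i])
            finish[target] += t / resource_speeds[target]
    return max(finish)

# Two resources (speeds 1 and 10), four tasks: the short tasks 1 and 2
# go to the slow resource, the long tasks 10 and 20 to the fast one,
# and both finish at time 3.0.
```

Keeping long tasks off slow resources is what lets MQ beat plain Round Robin on makespan in skewed workloads.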
Hybrid Task Scheduling Approach using Gravitational and ACO Search Algorithm – IRJET Journal
The document proposes a hybrid task scheduling approach for cloud computing called ACGSA that combines ant colony optimization and gravitational search algorithms. It describes using the Cloudsim simulator to test the performance of ACGSA and comparing it to ant colony optimization. The results show that ACGSA achieves better performance than the basic ant colony approach on relevant parameters like task scheduling time and resource utilization.
Proactive Scheduling in Cloud Computing – journalBEEI
Autonomic fault-aware scheduling is an important feature for cloud computing, as it relates to adapting to workload variation. In this context, this paper proposes a fault-aware, pattern-matching autonomic scheduler for cloud computing based on autonomic computing concepts. To validate the proposed solution, we performed two experiments, one with the traditional approach and the other with the pattern-recognition fault-aware approach. The results show the effectiveness of the scheme.
Sustainability has become an important topic across many disciplines, and IT is no different. As we are building solutions for the future, we have a responsibility to make them sustainable so that we leave not only great tech solutions but also a habitable planet for future generations.
My Experience Using Oracle SQL Plan Baselines 11g/12c – Nelson Calero
This presentation shows how to use the Oracle database functionality SQL Plan Baselines, with examples from real life usage on production (mostly 11gR2) and how to troubleshoot it.
SQL Plan Baselines is a feature introduced in 11g to manage SQL execution plans and prevent performance regressions. The concepts will be presented, along with examples and some edge cases.
Five Tips to Get the Most Out of Your Indexing – Maria Colgan
This is one of the 15-minute "TED"-style talks presented as part of the Database Symposium at the ODTUG Kscope18 conference. In this presentation @SQLMaria provides 5 useful tips for getting the most out of indexes in the Oracle Database.
IRJET- Fitness Function as Trust Value using to Efficient Multipath Routi... – IRJET Journal
This paper proposes an energy efficient multipath routing protocol for mobile ad hoc networks. The protocol considers transmission power and remaining energy of nodes as energy metrics to select energy efficient paths and extend network lifetime. It is implemented using the NS-2 simulator. Simulation results show that the proposed protocol increases network lifetime and performance compared to the conventional AOMDV routing protocol by reducing energy consumption of mobile nodes. Key contributions are using transmission power control and residual energy calculation to select paths, and modifying the AOMDV route discovery process to include these energy metrics in route selection.
This document discusses Bloom filters and how they are used to optimize hash joins in Oracle databases. It provides examples of how hash joins work in serial and parallel execution plans. Bloom filters allow the cost-based optimizer to "filter early" in hash joins by preventing unnecessary row transportation between parallel query slave sets. The document explains the build and probe phases of hash joins and how Bloom filters can filter rows before they flow through parent execution stages, improving performance.
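The "filter early" mechanism described above rests on the Bloom filter's one-sided error: it may pass a row that has no match, but it never drops one that does. A minimal sketch in Python (the bit size and hash count are arbitrary illustrative choices):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions set k bits per key;
    membership tests can yield false positives but never false
    negatives, which is safe for pre-filtering join probe rows."""
    def __init__(self, m_bits=1024, k=3):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

# Build phase: hash the small (build) side and fill the filter.
build_keys = [1, 2, 3]
bf = BloomFilter()
for k in build_keys:
    bf.add(k)

# Probe phase: rows that cannot possibly match are discarded before
# they are ever transported between slave sets.
survivors = [k for k in range(1000) if bf.might_contain(k)]
```

Because the filter is tiny, it can be shipped to the producing slaves (or, on Exadata, to the storage cells) so non-matching rows never travel at all.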
Svm Classifier Algorithm for Data Stream Mining Using Hive and R – IRJET Journal
This document proposes using Hive and R to perform data stream mining on big data. Hive is used to query and analyze large datasets stored in Hadoop. Test and trained datasets are extracted from the data using Hive queries. The Support Vector Machine (SVM) classifier algorithm analyzes the data to produce a statistical report in R, comparing the accuracy of linear and nonlinear models. The proposed method aims to improve data processing speed and ability to analyze large volumes of data as compared to other tools.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs – Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... – Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated in the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
How to Get CNIC Information System with Paksim Ga.pptx – danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
GridMate - End to end testing is a critical piece to ensure quality and avoid... – ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 – Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 6 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover Test Automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities in a test automation solution.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf – Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
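The nearest-neighbour retrieval at the heart of vector search can be sketched in Python; the embeddings below are made up, and a production index such as Atlas Vector Search uses approximate search rather than this brute-force scan:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: 1.0 means
    the same direction (semantically similar), 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vector_search(query, documents, top_k=2):
    """Brute-force k-nearest-neighbour search over stored embeddings,
    returning document names ranked by similarity to the query."""
    scored = sorted(documents.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# Toy two-dimensional "embeddings" for three documents.
docs = {"cat": [1.0, 0.1], "dog": [0.9, 0.2], "car": [0.0, 1.0]}
```

Because ranking is by vector proximity rather than keyword overlap, "cat" and "dog" score close together even though the strings share no terms.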
Removing Uninteresting Bytes in Software Fuzzing – Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
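The core idea behind DIAR, dropping seed bytes whose mutation never changes observed behaviour, can be sketched in Python; the coverage function and the byte-level API here are hypothetical stand-ins for real instrumentation such as AFL's edge-coverage map:

```python
def uninteresting_bytes(seed, coverage, trials=(0x00, 0xFF)):
    """A byte is 'uninteresting' if replacing it with every trial
    value leaves the observed coverage unchanged, so mutating it
    during a fuzzing campaign is wasted effort."""
    base = coverage(seed)
    dull = []
    for i in range(len(seed)):
        changed = False
        for v in trials:
            mutated = seed[:i] + bytes([v]) + seed[i + 1:]
            if coverage(mutated) != base:
                changed = True
                break
        if not changed:
            dull.append(i)
    return dull

# Toy target whose behaviour depends only on a two-byte header:
# every byte after the header is flagged as uninteresting.
def toy_coverage(data):
    return (data[:2] == b"OK", len(data) > 2)

seed = b"OK" + b"\x00" * 6
```

A real implementation would use many more trial values and actual coverage feedback, but the payoff is the same: smaller seeds concentrate mutations on bytes that can actually steer execution.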
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.