Oracle Database 12c became available to the public back in 2013, but it took more than a year for Oracle customers to start upgrading their existing databases to the new release. Many customers, in 2016, have still not begun migrating to 12c, even though the premier support deadline for Oracle Database 11g passed in January 2015.
I had the chance to spend the last two years at a customer who decided to embrace the new release and start the migration to 12c as soon as possible, in order to get the most out of the (many) new features that this release offers. When the very first production databases were migrated to 12c, users quite soon began noticing that some queries took much longer to complete; some of them were actually several orders of magnitude slower than before. After a short investigation, I understood that most of those queries had been slowed down by the new “Adaptive Features” introduced in 12c for exactly the opposite reason: to increase performance. This is what this article is about.
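For readers hitting the same kind of regression: in 12.1 the adaptive features are governed by a single initialization parameter, which can be turned off at session level while troubleshooting (in 12.2 the control is split into OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS). A minimal diagnostic sketch, assuming a 12.1 instance:

```sql
-- Check whether the adaptive features are enabled (12.1; default is TRUE)
SELECT name, value
FROM   v$parameter
WHERE  name = 'optimizer_adaptive_features';

-- Disable them for the current session only, e.g. to reproduce a regression
ALTER SESSION SET optimizer_adaptive_features = FALSE;

-- Or system-wide, once the regression is confirmed:
-- ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;
```

Disabling the features globally is a blunt instrument; comparing a query's behavior with and without the session-level setting is usually enough to confirm whether the adaptive features are the culprit.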
The document discusses adaptive query optimization in Oracle 12c. It begins by describing drawbacks of the optimizer in pre-12c versions, such as insufficient statistics triggering dynamic sampling. It then outlines the key features of adaptive query optimization in 12c, including adaptive/dynamic plans using techniques like adaptive parallel distribution and adaptive joins. It also discusses automatic re-optimization using feedback from initial executions. The document provides illustrations of these techniques using example queries and optimizer statistics.
This document discusses new features in Oracle Database 12c related to adaptive query optimization and the Oracle optimizer. Key points include:
- Adaptive query optimization allows the optimizer to make run-time adjustments to execution plans and discover additional statistics to improve plans.
- Adaptive plans enable the optimizer to defer plan decisions until execution to choose better performing plans based on actual statistics.
- Adaptive join methods allow switching between join algorithms like nested loops and hash joins based on actual row counts.
- Adaptive parallel distribution methods can switch between distribution techniques like broadcast and hash based on actual row counts.
- Adaptive bitmap index pruning may skip less useful indexes to reduce processing costs during query execution.
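The behaviors listed above can be observed directly: DBMS_XPLAN can display an adaptive plan including the branches the optimizer discarded at run time (inactive operations are prefixed with a dash). A sketch, where the SQL_ID is a hypothetical value taken from V$SQL:

```sql
-- Show the full adaptive plan, including inactive (discarded) operations;
-- 'abcd1234efgh5' is a placeholder SQL_ID
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => 'abcd1234efgh5',
         format => '+ADAPTIVE'));
```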
The document introduces new optimizer and statistics features in Oracle Database 12c, including Adaptive Query Optimization which enables run-time adjustments to execution plans based on actual statistics collected during execution. Adaptive plans can switch join methods, for example changing from a nested loops join to a hash join. Adaptive parallel distribution methods can defer choosing a distribution method until execution and switch between hash and broadcast based on row counts. Existing functionality like SQL Plan Management and statistics collection is also enhanced.
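Automatic re-optimization and adaptive plan resolution leave traces in V$SQL, which makes affected statements easy to find. A sketch using two columns that are new in 12c:

```sql
-- Find child cursors flagged for re-optimization (statistics feedback)
-- or that ran with an adaptive plan
SELECT sql_id, child_number,
       is_reoptimizable,          -- 'Y' or 'R' when feedback will produce a new plan
       is_resolved_adaptive_plan  -- 'Y' once the adaptive plan's final shape is chosen
FROM   v$sql
WHERE  is_reoptimizable != 'N'
   OR  is_resolved_adaptive_plan IS NOT NULL;
```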
Oracle Fleet Patching and Provisioning Deep Dive Webcast SlidesLudovico Caldara
Oracle Fleet Patching and Provisioning allows users to provision, patch, and upgrade Oracle databases and Grid Infrastructure across many servers from a central location. It uses a repository of gold images and working copies to deploy consistent configurations at scale while minimizing errors. Key features include Oracle home management, provisioning, patching, upgrading, and integration with REST APIs.
Oracle Drivers configuration for High Availability, is it a developer's job?Ludovico Caldara
UCP, GridLink, TAF, AC, TAC, FAN… The configuration of Oracle Drivers for application high availability is not an easy job. The developers often care about the minimal working configuration, while the DBAs are busy with the operations. In this session I will try to demystify application server’s connectivity to the database and give a direction toward the highest availability, using Real Application Clusters and new Oracle features like TAC and CMAN TDM.
Oracle Drivers configuration for High AvailabilityLudovico Caldara
This document discusses various techniques for achieving high availability and transparent failover in Oracle databases, including:
- Fast Application Notification (FAN) to notify clients of service relocations and allow sessions to drain gracefully.
- Transparent Application Failover (TAF) which automates reconnects for OCI clients and allows resuming queries after a failure.
- Application Continuity (AC) which records transaction state to allow replaying transactions after a failure, requiring code changes or a connection pool.
- Transparent Application Continuity (TAC) which provides the benefits of AC without requiring code changes for supported drivers.
- Connection managers like Traffic Director which can provide session failover without client changes by managing
... or why Oracle still cares about CMAN and why you should do it too
The Oracle Connection Manager (CMAN) is the Swiss-army knife for database connections. It can be used for security, routing, high availability, single-point of contact... Starting with Oracle 18c, it has been extended with the new Traffic Director Mode (CMAN TDM), that allows transparent failover for applications that do not implement it natively.
In this session I will introduce briefly what CMAN is capable of, how to configure it in a high availability environment, and how the new release achieves a higher protection level.
The document discusses how REST APIs and ORDS can help DBAs adopt more agile practices. It provides examples of how DBAs can expose database operations and metadata via REST endpoints to improve communication and automation between developers and DBAs. This includes endpoints for checking database connectivity, putting applications in maintenance mode, retrieving backup status, creating/deleting restore points, refreshing schemas, and more. The document argues that REST and ORDS can help make DBAs more agile by standardizing their operations and facilitating integration with other tools and services.
Effective Oracle Home Management in the new Release Model era - Ludovico Caldara
How many companies can afford to patch their environments regularly?
Patching and maintaining a large number of Oracle databases is perceived as complex by most companies. Is there a way to make patching simpler and more controlled? What are the best (and worst) practices for Oracle Home maintenance?
What are the challenges of the new release model that will bring us one new major release per year?
In this session, we will explain some ideas for improving Oracle Home management and database patching, with practical examples of automated environments, live demos included!
Oracle Active Data Guard 12cR2. Is it the best option? - Ludovico Caldara
If you are using Oracle Data Guard for data protection (hint: you should!), you might also want to know more about Oracle Active Data Guard and what makes it essential for even greater availability and performance. In this session, I will give an overview of many new and old Active Data Guard features, such as:
- Rolling Upgrades
- Real-time Query
- Fast Incremental Backup
- Subset Standby
- Multiple Instance Redo Apply
- Advanced topologies (Real-time Cascading Standby, Far Sync Standby, Alternate destinations)
- Automatic Block Repair
- Global Data Services
I will also explain why the ROI of Oracle Database Enterprise Edition can be higher when coupled with Oracle Active Data Guard.
How to bake a Customer Story with Windows, NVM-e, Data Guard, ACFS Snaps... - Ludovico Caldara
This document describes a new solution implemented by Trivadis to address a customer's need to clone databases faster. The previous solution took 2 hours to clone a 300GB database. The new solution leverages Oracle Data Guard, NVM-e, ACFS snapshots, bash scripts, Linux, and Windows with Perl to enable cloning a database within minutes. Key aspects of the new architecture include using ACFS snapshots to quickly copy data, placing components like GRID infrastructure and databases on high-performance NVM-e storage, and automating the cloning process with scripts. This provides faster database clones while avoiding costly additional technologies.
Get the most out of Oracle Data Guard - OOW version - Ludovico Caldara
If you use the Oracle Data Guard feature just for data protection, you are using less than half of its potential. You already pay for it, so why not get the most out of it? In this session I will show how you can use Oracle Data Guard capabilities for common tasks such as database cloning, database migration and reporting, with the help of other features included in Oracle Database Enterprise Edition.
Get the most out of Oracle Data Guard - POUG version - Ludovico Caldara
If you use the Oracle Data Guard feature just for data protection, you are using less than half of its potential. You already pay for it, so why not get the most out of it? In this session I will show how you can use Oracle Data Guard capabilities for common tasks such as database cloning, database migration and reporting, with the help of other features included in Oracle Database Enterprise Edition.
Are your Oracle databases highly available? You have deployed Real Application Clusters (RAC), Data Guard, or Failover Clusters and are well protected against server failures? Great – the foundations of a highly available environment are in place. However, to ensure that backend infrastructure failures also remain transparent to the client, an appropriate client-side configuration is a prerequisite.
This lecture will discuss the Oracle technologies that can be used to achieve automatic client failover functionality. What are the advantages, but also the limitations of these technologies?
Adaptive Features or: How I Learned to Stop Worrying and Troubleshoot the Bomb. - Ludovico Caldara
Adaptive Dynamic Sampling, Adaptive Execution Plans, SQL Plan Directives: these new features are the new performance troublemakers when migrating databases from 11g to 12c. The optimizer uses them to seek the perfect execution plan, but does it always succeed? This session will focus on the first steps required to quickly troubleshoot performance issues caused by the adaptive features.
DMU is the new tool introduced by Oracle for converting databases to the Unicode character set. Besides briefly introducing the tool, this session will focus on a real database conversion scenario faced by a customer, the problems encountered, and the solutions.
Migrating to Oracle Database 12c: 300 DBs in 300 days. - Ludovico Caldara
For a customer in Switzerland, we are in the process of migrating 400 databases to 12c. We have migrated 300 so far, and we have had both good and bad surprises. This session will show a few scenarios that we faced during the upgrade project.
Rapid Home Provisioning is a new feature in Oracle Grid Infrastructure 12c R2 that provides a simplified way to provision and patch Oracle software and databases. It uses a centralized management server and golden images stored on ACFS to deploy pre-packaged and patched Oracle homes to client nodes. Administrators can easily create working copies of golden images, deploy databases from the working copies, and seamlessly patch databases by moving them to a working copy based on a newer patched golden image with a single command.
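The workflow above maps onto a handful of rhpctl commands. As a hedged sketch (image, working-copy and database names, and the staging path, are all hypothetical), the import/provision/patch cycle could look like:

```
# Import a patched golden image into the RHP server
rhpctl import image -image DB12201_RU -zip /stage/db12201_ru.zip

# Provision a working copy of that image on a client node
rhpctl add workingcopy -workingcopy WC_RU -image DB12201_RU -storagetype LOCAL

# Patch a database by moving it to the new working copy
rhpctl move database -sourcewc WC_BASE -patchedwc WC_RU -dbname CDB01
```

The "move" step is the single command mentioned in the summary: the database is restarted out of the new, already-patched home rather than being patched in place.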
The document discusses using Oracle ACFS (ASM Cluster File System) as a storage option for Oracle Database datafiles. It provides steps for creating an ACFS volume within an ASM disk group, formatting it, mounting it and confirming the mount. This allows configuring an Oracle database to use the ACFS volume for datafiles, enabling high-availability shared storage across nodes.
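The steps listed in the summary can be sketched as a short admin session. Disk group, volume, size, device suffix and mount point are placeholders here; the actual device name under /dev/asm/ is reported by volinfo:

```
# Create an ADVM volume inside the DATA disk group
asmcmd volcreate -G DATA -s 50G datavol

# Look up the device name that was assigned to the volume
asmcmd volinfo -G DATA datavol

# Format the volume with ACFS and mount it
mkfs -t acfs /dev/asm/datavol-123
mkdir -p /u01/oradata
mount -t acfs /dev/asm/datavol-123 /u01/oradata
```

Once mounted on all nodes, /u01/oradata can hold datafiles like any other cluster file system.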
ORACLE 12C DATA GUARD: FAR SYNC, REAL-TIME CASCADE STANDBY AND OTHER GOODIES - Ludovico Caldara
The new release of Oracle Database comes with many exciting new enhancements for High Availability.
This whitepaper introduces some new Data Guard features. Among various enhancements, special attention will be given to
the new Far Sync Instance and the Real-Time Cascade Standby.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many features that provide convenience and capability sacrifice security. This best practices guide outlines steps users can take to better protect personal devices and information.
Project Management Semester Long Project - Acuity - jpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing and ingesting data into a serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
Webinar: Designing a schema for a Data Warehouse - Federico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires first gathering information about the business processes that need to be analysed. These processes must then be translated into so-called star schemas, that is, denormalised databases where each table represents either a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Skybuffer SAM4U tool for SAP license adoption - Tatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf - Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptx - SitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Fueling AI with Great Data with Airbyte Webinar - Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Programming Foundation Models with DSPy - Meetup Slides - Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations and training activities related to LibreOffice. Previously she worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname, deneb_alpha, comes from).