The document discusses the top 12 new features of Oracle 12c, as presented by David Yahalom of NAYA Technologies. It covers improved column defaults, increased size limits, improved top-N queries, temporary undo, new partitioning features, Transaction Guard, adaptive execution plans, enhanced statistics, data optimization and information lifecycle management (ILM), row pattern matching, and a 50% discount code for an Oracle performance tuning seminar offered by NAYA Technologies.
The document discusses adaptive query optimization in Oracle 12c. It begins by describing drawbacks of the optimizer in pre-12c versions, such as insufficient statistics triggering dynamic sampling. It then outlines the key features of adaptive query optimization in 12c, including adaptive/dynamic plans using techniques like adaptive parallel distribution and adaptive joins. It also discusses automatic re-optimization using feedback from initial executions. The document provides illustrations of these techniques using example queries and optimizer statistics.
The document discusses adaptive query optimization in Oracle 12c. Key points include:
- In 12c, adaptive plans allow the execution plan to change at runtime based on statistics collected, such as switching from a hash join to a nested loops join.
- During the first execution, a statistics collector is inserted and the plan is changed. SQL plan directives are then created.
- For subsequent executions, the information from the initial execution is used to automatically re-optimize the plan, improving performance over time.
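As a sketch of how an adaptive plan can be inspected (the tables and query are illustrative, not from the slides; the ADAPTIVE format modifier for DBMS_XPLAN is a documented 12c option):

```sql
-- Run a join whose cardinality estimate may be wrong at parse time.
SELECT e.ename, d.dname
FROM   emp e JOIN dept d ON e.deptno = d.deptno
WHERE  d.loc = 'DALLAS';

-- Display the adaptive plan for the last statement: inactive plan
-- steps are shown with a '-' prefix, and the Note section reports
-- "this is an adaptive plan".
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL +ADAPTIVE'));
```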
MySQL/MariaDB query optimizer tuning tutorial from Percona Live 2013, by Sergey Petrunya
The document discusses techniques for identifying and addressing problems with a database query optimizer. It describes old and new tools for catching slow queries, such as the slow query log, SHOW PROCESSLIST, and the Performance Schema. It also provides examples of using these tools to analyze query plans, identify inefficient plans, and determine if optimizer settings or query structure need to be modified to address performance issues.
The document discusses histograms in Oracle databases before and after Oracle 12c. It describes the different types of histograms including frequency histograms, height-balanced histograms, top frequency histograms, and hybrid histograms. It highlights issues with histograms in Oracle 11g and how Oracle 12c introduced new histogram types to improve cardinality estimates.
2012-09 MariaDB Boston Meetup - Is MariaDB a Replacement for MySQL?, by YUCHENG HU
MariaDB is a community developed fork of MySQL created by many of the original MySQL developers. It aims to be a drop-in replacement for MySQL that is fully open source. Major versions include 5.1 which added new storage engines, 5.2 which focused on authentication and statistics plugins, and 5.3 which introduced dynamic columns and handler sockets. Future versions will integrate features from MySQL 5.6 such as global transaction IDs and an improved InnoDB engine. MariaDB is supported by Monty Program and SkySQL.
The optimizer trace provides a detailed log of the actions taken by the query optimizer. It traces the major stages of query optimization including join preparation, join optimization, and join execution. During join optimization, it records steps like condition processing, determining table dependencies, estimating rows for plans, considering different execution plans, and choosing the best join order. The trace helps understand why certain query plans are chosen and catch differences in plans that may occur due to factors like database version changes.
Using histograms to provide better query performance in MariaDB. Histograms capture the distribution of values in columns to help the query optimizer select better execution plans. The optimizer needs statistics on data distributions to estimate query costs accurately. Histograms are not enabled by default but can be collected using ANALYZE TABLE with the PERSISTENT option. Making histograms available improves the performance of queries that have selective filters or ordering on non-indexed columns.
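A minimal sketch of collecting histograms in MariaDB (the table and column names are illustrative; the statements and the `mysql.column_stats` view are documented MariaDB features):

```sql
-- Collect engine-independent statistics, including a histogram,
-- for a chosen column of an illustrative "orders" table.
SET SESSION histogram_size = 254;   -- number of histogram buckets
ANALYZE TABLE orders PERSISTENT FOR COLUMNS (amount) INDEXES ();

-- Tell the optimizer to prefer the collected statistics.
SET SESSION use_stat_tables = 'preferably';

-- The collected histograms are visible here:
SELECT table_name, column_name, hist_type
FROM   mysql.column_stats;
```

Note that histograms only influence plan choice when `optimizer_use_condition_selectivity` is set high enough to use column statistics.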
The document summarizes new features in the query optimizer in MariaDB 10.4, including:
1) An optimizer trace that provides insight into the query planning process.
2) Using sampling for histogram collection during ANALYZE TABLE to improve performance.
3) Rowid filtering that pushes qualifying conditions into joins to filter out non-matching rows earlier.
4) Updated default settings that make better use of statistics and condition selectivity.
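The optimizer trace from point 1 can be sketched as follows (the traced query is illustrative; the trace table in `information_schema` is as documented for MariaDB 10.4):

```sql
-- Enable the trace, run a statement, then read the trace JSON.
SET optimizer_trace = 'enabled=on';

SELECT * FROM t1 WHERE a < 10 ORDER BY b LIMIT 5;

SELECT trace
FROM   information_schema.OPTIMIZER_TRACE;

SET optimizer_trace = 'enabled=off';
```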
The document discusses data manipulation language (DML) statements in SQL. It describes how to insert rows into a table using INSERT, update rows using UPDATE, and delete rows from a table using DELETE. It also covers transaction control using COMMIT to save changes permanently and ROLLBACK to undo pending changes back to a savepoint.
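The DML and transaction-control statements described above can be sketched like this (the `employees` table and its columns are illustrative):

```sql
-- Insert a row.
INSERT INTO employees (employee_id, last_name, salary)
VALUES (207, 'Gietz', 8300);

SAVEPOINT before_raise;

-- Modify and remove rows within the same transaction.
UPDATE employees SET salary = salary * 1.10 WHERE employee_id = 207;
DELETE FROM employees WHERE employee_id = 206;

ROLLBACK TO SAVEPOINT before_raise;  -- undoes the UPDATE and DELETE
COMMIT;                              -- makes the INSERT permanent
```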
Oracle Exadata is the equivalent of an F1 car in terms of performance, but are you sure your application is driving it at its full potential? A simple "lift & shift" approach to Exadata migration may miss significant opportunities for improvement. This session highlights a few examples where small changes dramatically improved application performance.
The internals of Spark SQL Joins, by Dmytro Popovich (Sigma Software)
This document discusses Spark SQL joins and the Spark query planning and execution process. It begins with two examples of joining two DataFrames, with the second example being 10x faster. It then covers how Spark SQL queries are optimized through logical and physical planning. Key stages include generating a logical plan from the SQL/DataFrame, analyzing and optimizing this plan, generating physical plans, and code generation. Join algorithms like broadcast hash join, shuffle hash join, and sort merge join are explained in terms of complexity, requirements, and shuffling. The performance difference in the examples is explained by the physical plans generated, with the faster example using a broadcast hash join to avoid shuffling.
Informix Warehouse Accelerator (IWA) features in version 12.1, by Keshav Murthy
The document discusses enhancements made to Informix Warehouse Accelerator (IWA) in version 12.10. Key points include:
- IWA now supports operations like creating, deploying, loading, enabling, and disabling data marts on secondary nodes in MACH11 and high availability environments, in addition to the primary/standard server node.
- New procedures like dropPartMart and loadPartMart allow refreshing partitions in a partitioned fact table within a data mart.
- Performance of SQL queries involving UNIONs, derived tables, and DISTINCT aggregates was improved.
- Additional OLAP functions and options like NULLS FIRST/LAST in ORDER BY were added for enhanced analytical querying.
The document provides an overview of Oracle indexes from a conceptual and internal perspective. It discusses B-tree indexes, including their structure, leaf and branch nodes, and how select, update, delete and insert operations are handled internally. It also covers bitmap indexes and their storage format, as well as function-based and reversed indexes.
1. The document discusses two methods for examining SQL execution plans: the EXPLAIN PLAN statement, which estimates the plan without executing the query, and the GATHER_PLAN_STATISTICS hint, which captures runtime statistics during execution.
2. It explains the components of an execution plan such as operation IDs, costs, and predicate information. Filter operations may validate logic before child operations execute.
3. Displaying execution plan statistics with DBMS_XPLAN after running a query with GATHER_PLAN_STATISTICS shows runtime metrics like number of rows and buffers accessed.
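Both methods can be sketched as follows (the query and table are illustrative; the statements and DBMS_XPLAN calls are standard Oracle features):

```sql
-- Method 1: estimate the plan without running the query.
EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE department_id = 50;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Method 2: run the query with the hint, then display actual
-- row counts and buffer gets alongside the estimates.
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   employees WHERE department_id = 50;
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing the E-Rows (estimated) and A-Rows (actual) columns in the second output is the usual way to spot cardinality misestimates.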
The document summarizes how SQL Plan Directives in Oracle 12c can help address issues caused by cardinality misestimation in the optimizer. It provides an example where the optimizer underestimates the number of rows returned by a query on a table due to not having statistics on correlated columns. In 12c, a SQL Plan Directive is automatically generated after the first execution to capture this misestimation. On subsequent queries, the directive can be used to provide more accurate cardinality estimates through automatic reoptimization or dynamic sampling.
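The directives created this way can be inspected through documented 12c dictionary views (the schema name below is illustrative):

```sql
-- List SQL plan directives and the objects they apply to.
SELECT d.directive_id, d.type, d.state, d.reason,
       o.owner, o.object_name
FROM   dba_sql_plan_directives  d
JOIN   dba_sql_plan_dir_objects o
       ON o.directive_id = d.directive_id
WHERE  o.owner = 'SCOTT';
```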
This paper describes the evolution of PLAN_TABLE and DBMS_XPLAN in 11g and some of the features that can be used to troubleshoot SQL performance effectively and efficiently.
This document provides an overview of statistics for database developers. It discusses key statistics concepts like cardinality estimation and how statistics are used to estimate the number of rows returned by a query. It also covers important statistics-related topics such as data skew, dynamic sampling, and extended statistics that can impact query optimization. Understanding how the optimizer uses statistics is important for helping the optimizer generate efficient execution plans.
The document discusses Exadata Smart Scan, which offloads SQL processing from the database layer to the storage cell layer. Smart Scan passes only the required data blocks filtered by predicates to the database server instead of all blocks. Testing shows that with Smart Scan, a count query takes 0.05 seconds versus 15.68 seconds without Smart Scan. Analysis of statistics with Smart Scan shows most blocks are filtered at the storage layer with only a small amount of data, about 0.18MB, returned to the database layer.
Oracle B-tree index internals - rebuilding the truth, by Xavier Davias
This document discusses dispelling myths about Oracle B-tree indexes and explaining how they work. It aims to explain how to investigate index internals, how Oracle B-tree indexes are structured and balanced, and when index rebuilds may be appropriate. It provides examples of index structures, headers, entries and updates to prove that indexes are always balanced and efficient without needing rebuilds in most cases.
Oracle 12c New Features For Better Performance, by Zohar Elkayam
This document discusses new features in Oracle 12c that improve database performance. It begins with an introduction of the speaker and their company Brillix. The document then covers Oracle Database In-Memory Column Store introduced in 12.1, which allows both row and column format data access. Oracle 12.2 introduced Sharded Database Architecture for horizontal scaling across multiple databases. Additional optimizer changes in 12c such as adaptive query optimization and dynamic statistics are also summarized.
Oracle Database In-Memory introduces a number of new features in the query optimizer. The aim of this presentation is to describe and demonstrate how they work.
Tech Talk: Best Practices for Data Modeling, by ScyllaDB
When we think about database performance, data modeling shouldn't be overlooked; the way data is written and retrieved dictates how fast your system can operate. Because Scylla is a non-relational database, its data model focuses on application queries to build the most efficient data structure. Adapting to a new data modeling mindset can be done pragmatically by understanding new database concepts and how they apply to Scylla.
In this webinar you will learn about:
- Scylla data model and basic CQL concepts
- Primary and Clustering key selection
- Collections and User-Defined Types
- Problem finding techniques
The document summarizes new features in Oracle Database 12c, relative to Oracle 11g, that would help a DBA currently using 11g. It lists and briefly describes features such as the READ privilege, temporary undo, online data file move, DDL logging, and many others. The objectives are to make the DBA aware of useful 12c features when working with a 12c database and to discuss each feature at a high level within 90 seconds.
Oracle Database 12c includes over 500 new features. Some key new features include:
- Oracle Enterprise Manager Database Express (EM Express), which replaces Database Control; it has fewer features than Database Control but does not require Java or an application server.
- New online capabilities like online DDL operations with no DDL locking, online move of partitions with no impact to queries, and online statistics gathering for bulk loads.
- Adaptive SQL Plan Management which allows the optimizer to select a more optimal plan at execution time based on current statistics.
- Multitenant architecture which allows consolidation of multiple databases into one container database with pluggable databases.
This document introduces Spark SQL 1.3.0 and how to use it efficiently. It discusses the main objects, such as SQLContext, and how to create DataFrames from RDDs and JSON and perform operations like select, filter, groupBy, join, and saving data. It shows how to register DataFrames as tables and write SQL queries. DataFrames also support RDD actions and transformations. The document provides references for learning more about DataFrames and their development direction.
You've seen the basic two-stage example Spark programs, and now you're ready to move on to something larger. I'll go over lessons I've learned for writing efficient Spark programs, from design patterns to debugging tips.
The slides are largely just talking points for a live presentation, but hopefully you can still make sense of them for offline viewing as well.
The document discusses various benchmarks that are commonly used to evaluate Semantic Web repositories and their performance handling large amounts of RDF data. Some of the major benchmarks mentioned include the Lehigh University Benchmark (LUBM), Berlin SPARQL Benchmark (BSBM), SP2Bench, Social Network Intelligence Benchmark (SIB), and DBPedia SPARQL Benchmark. The document also provides an overview of different benchmark components and links to resources with performance results from various RDF stores and systems.
Data Science at Scale: Using Apache Spark for Data Science at Bitly, by Sarah Guido
Given at Data Day Seattle 2015.
Bitly generates over 9 billion clicks on shortened links a month, as well as over 100 million unique link shortens. Analyzing data of this scale is not without its challenges. At Bitly, we have started adopting Apache Spark as a way to process our data. In this talk, I’ll elaborate on how I use Spark as part of my data science workflow. I’ll cover how Spark fits into our existing architecture, the kind of problems I’m solving with Spark, and the benefits and challenges of using Spark for large-scale data science.
In this talk, we'll discuss technical designs for supporting HBase as a "native" data source to Spark SQL, to achieve both query and load performance and scalability: near-precise execution locality for queries and loading, fine-tuned partition pruning, predicate pushdown, plan execution through coprocessors, and an optimized, fully parallelized bulk loader. Point and range queries on dimensional attributes will benefit particularly well from these techniques. Preliminary test results versus established SQL-on-HBase technologies will be provided. The speaker will also share the future plan and real-world use cases, particularly in the telecom industry.
The document outlines an agenda for a workshop on Pandas, data wrangling, and data science using Pandas. The agenda includes: an introduction and setup; discussing the data science pipeline and Pandas APIs/namespaces; basic Pandas maneuvers; data wrangling techniques like transformations, aggregations, and joins; hands-on exercises using datasets like Titanic and RecSys-2015; and a Q&A session. The goals are to understand data wrangling with Pandas through interactive examples and hands-on practice with real datasets.
The document outlines an agenda for a conference on Apache Spark and data science, including sessions on Spark's capabilities and direction, using DataFrames in PySpark, linear regression, text analysis, classification, clustering, and recommendation engines using Spark MLlib. Breakout sessions are scheduled between many of the technical sessions to allow for hands-on work and discussion.
Bringing sequential analysis to A/B testing, with examples from the speaker's work at Optimizely.
These slides are from a talk given at the SF Data Engineering meetup. http://www.meetup.com/SF-Data-Engineering/events/231047195/
Advanced Data Science on Spark (Reza Zadeh, Stanford), from Spark Summit
The document provides an overview of Spark and its machine learning library MLlib. It discusses how Spark uses resilient distributed datasets (RDDs) to perform distributed computing tasks across clusters in a fault-tolerant manner. It summarizes the key capabilities of MLlib, including its support for common machine learning algorithms and how MLlib can be used together with other Spark components like Spark Streaming, GraphX, and SQL. The document also briefly discusses future directions for MLlib, such as tighter integration with DataFrames and new optimization methods.
Exadata has been around since 2008, and its software features are enhanced with each release. This presentation talks about the 12.1.x.x series of software updates and some of the things you can now do with Exadata.
New features for database administrators in the Oracle 12c database. Here are some excellent Oracle 12c new features, with examples, for learning purposes. SQL, backup and recovery, database management, Oracle RAC, and Oracle ASM are included.
What is new in 12c for Backup and Recovery? Presentation by Francisco Alvarez
Francisco Munoz Alvarez is an Oracle ACE Director and president of several Oracle user groups. He has many Oracle certifications and experience beta testing various Oracle products.
The presentation covers new features in Oracle Database 12c for backup and recovery including the multitenant container database, enhancements to RMAN and Data Pump, and changes to privileges for backups. It also discusses pluggable databases, container and PDB backup/restore, multisection backups, active duplicate, and SQL usage in RMAN.
Reactive microservices with Play and Akka, by scalaconfjp
This document discusses making microservices reactive using Play and Akka in Scala. It describes how to make a customer microservice resilient to data store failures and elastic to varying workloads. The solution involves clustering the data store using Postgres BDR, deploying the microservices to multiple nodes using ConductR for elastic scaling, and replicating cache updates across nodes using Akka data replication.
Spark Summit East 2015 Keynote - Databricks CEO Ion Stoica, by Databricks
This document discusses Databricks Cloud, a platform for running Apache Spark workloads that aims to accelerate time-to-results from months to days. It provides a unified platform with notebooks, dashboards, and jobs running on Spark clusters managed by Databricks. Key benefits include zero management of clusters, interactive queries and streaming for real-time insights, and the ability to develop models and visualizations in notebooks and deploy them as production jobs or dashboards without code changes. The platform is open source with no vendor lock-in and supports various data sources and third party applications. It is being used by over 3,500 organizations for applications like data preparation, analytics, and machine learning.
Fast and Simplified Streaming, Ad-Hoc and Batch Analytics with FiloDB and Spa..., by Helena Edelson
O'Reilly webcast with the speaker and Evan Chan on the new SNACK stack (a play on SMACK) with FiloDB: Scala, Spark Streaming, Akka, Cassandra, FiloDB, and Kafka.
The document discusses the top 12 new features of Oracle 12c, including improved column defaults that allow identity columns, increased size limits for VARCHAR2 columns up to 32K, improved top-N queries using the new row-limiting clause, and adaptive execution plans that allow the optimizer to choose an alternative execution plan based on statistics gathered during the first execution. Temporary undo segments are also introduced to avoid generating redo for temporary table operations.
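The 12c row-limiting syntax for top-N queries can be sketched as follows (the `employees` table is illustrative; the FETCH FIRST / OFFSET clauses are standard 12c syntax):

```sql
-- Pre-12c style: wrap the ordered query and filter on ROWNUM.
SELECT *
FROM   (SELECT * FROM employees ORDER BY salary DESC)
WHERE  ROWNUM <= 5;

-- 12c row-limiting clause: clearer, and supports pagination.
SELECT employee_id, salary
FROM   employees
ORDER  BY salary DESC
FETCH  FIRST 5 ROWS ONLY;

-- Skip the top 5, then take the next 5.
SELECT employee_id, salary
FROM   employees
ORDER  BY salary DESC
OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;
```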
DBA Commands and Concepts That Every Developer Should Know - Part 2, by Alex Zaballa
This document provides a summary of several database administration (DBA) commands and concepts relevant for developers. It discusses topics such as count(1) vs count(*), gathering system statistics, setting the DB_FILE_MULTIBLOCK_READ_COUNT parameter, analyzing tables, explaining plans, monitoring SQL performance, full table scans, pending statistics, restoring statistics history, parallel DML, Flashback Query, DBMS_APPLICATION_INFO, and privileges for reading tables. The document is intended to help developers better understand and work with database configurations and operations.
DBA Commands and Concepts That Every Developer Should Know - Part 2Alex Zaballa
This document provides a summary of several database administration (DBA) commands and concepts relevant for developers. It discusses topics such as count(1) vs count(*), gathering system statistics, setting the DB_FILE_MULTIBLOCK_READ_COUNT parameter, analyzing tables, explaining plans, monitoring SQL performance, full table scans, pending statistics, restoring statistics history, parallel DML, Flashback Query, DBMS_APPLICATION_INFO, schema management, adding columns with defaults, object and system privileges. The document is intended to help developers better understand and work with database concepts.
Narayan Newton presented on recent developments in MySQL. He discussed how MySQL has fragmented into several variants including MariaDB, PerconaDB, and Drizzle. He provided details on improvements in Oracle MySQL 5.5 and 5.6, Percona Server, and MariaDB including new features like virtual and dynamic columns. Newton also covered optimization improvements and clustering options like Percona Cluster, MySQL Cluster, and Drizzle.
SQL Performance Tuning and New Features in Oracle 19cRachelBarker26
What's new in Oracle 19c (and CMiC R12) and the reporting software Jaspersoft Studios. If you are not interested in Jasper go ahead and skip to page 26. Explains how to read an execution plan and what to look for in an optimized execution plan.
Understanding Query Optimization with ‘regular’ and ‘Exadata’ OracleGuatemala User Group
The document discusses query optimization with regular Oracle databases and Exadata databases. It explains what happens when a SQL statement is issued, including parsing, optimization, and execution. It describes what an execution plan is and how it can be generated and displayed. It discusses how operations can be offloaded to storage cells on Exadata and factors the optimizer considers for determining a good execution plan.
How Database Convergence Impacts the Coming Decades of Data ManagementSingleStore
How Database Convergence Impacts the Coming Decades of Data Management by Nikita Shamgunov, CEO and co-founder of MemSQL.
Presented at NYC Database Month in October 2017. NYC Database Month is the largest database meetup in New York, featuring talks from leaders in the technology space. You can learn more at http://www.databasemonth.com.
Time Series data is proliferating with literally every step that we take, just think about things like Fit Bit bracelets that track your every move and financial trading data all of which is timestamped.
Time series data requires high performance reads and writes even with a huge number of data sources. Both speed and scale are integral to success, which makes for a unique challenge for your database.
A time series NoSQL data model requires flexibility to support unstructured, and semi-structured data as well as the ability to write range queries to analyze your time series data. So how can you tackle speed, scale and flexibility all at once?
Join Professional Services Architect Drew Kerrigan and Developer Advocate Matt Brender for a discussion of:
Examples of time series data sets, from IoT to Finance to jet engines
What makes time series queries different from other database queries
How to model your dataset to answer the right questions about your data
How to store, query and analyze a set of time series data points
Learn how a NoSQL database model and Riak TS can help you address the unique challenges of time series data.
Confoo.ca conference talk February 24th 2021 on MySQL new features found in version 8.0 including server and supporting utility updates for those who may have missed some really neat new features
Enhancements that will make your sql database roar sp1 edition sql bits 2017Bob Ward
This document provides information about various SQL Server features and editions. It includes a list of features available in each edition like row-level security, dynamic data masking, and in-memory OLTP. It also includes memory limits, MAXDOP settings, and pushdown capabilities for different editions. The document discusses lightweight query profiling improvements in SQL Server 2016 SP1 and provides details on predicate pushdown indicators in showplans.
Another year goes by, and most likely, another data access framework has been invented. It will claim to be the fastest, smartest way to talk to the database, and just like all those that came before it, it will not be. Because the best database access tool has been there for more than 30 years now, and that is PL/SQL. Although we all sometimes fall prey to the mindset of “Oh look, a shiny new tool, we should start using it," the performance and simplicity of PL/SQL remain unmatched. This session looks at the failings of other data access languages, why even a cursory knowledge of PL/SQL will make you a better developer, and how to get the most out of PL/SQL when it comes to database performance.
Apache Sqoop: A Data Transfer Tool for HadoopCloudera, Inc.
Apache Sqoop is a tool designed for efficiently transferring bulk data between Hadoop and structured datastores such as relational databases. This slide deck aims at familiarizing the user with Sqoop and how to effectively use it in real deployments.
MySQL 8 -- A new beginning : Sunshine PHP/PHP UK (updated)Dave Stokes
MySQL 8 has many new features and this presentation covers the new data dictionary, improved JSON functions, roles, histograms, and much more. Updated after SunshinePHP 2018 after feedback
MySQL 8.0 New Features -- September 27th presentation for Open Source SummitDave Stokes
MySQL 8.0 has many new features that you probably need to know about but don't. Like default security, window functions, CTEs, CATS (not what you think), JSON_TABLE(), and UTF8MB4 support.
This document provides information about new features and improvements in MySQL 8.0. It discusses enhancements to JSON functionality including new functions and indexing support. It also summarizes added functionality for GIS, UUIDs, common table expressions, window functions, and other query optimizations. The document notes that MySQL 8.0 uses utf8mb4 as the default character set for improved Unicode support and performance.
Developers’ mDay u Banjoj Luci - Bogdan Kecman, Oracle – MySQL Server 8.0mCloud
This document provides information about new features and improvements in MySQL 8.0. It discusses enhancements to JSON functionality including new functions and indexing support. It also summarizes added functionality for GIS, Unicode character sets, UUIDs, window functions, common table expressions, and other query optimizations. The document outlines goals of improving performance, manageability, security and standards compliance for MySQL.
Similar to Oracle12 - The Top12 Features by NAYA Technologies (20)
Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data LakeWalaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
Build applications with generative AI on Google CloudMárton Kodok
We will explore Vertex AI - Model Garden powered experiences, we are going to learn more about the integration of these generative AI APIs. We are going to see in action what the Gemini family of generative models are for developers to build and deploy AI-driven applications. Vertex AI includes a suite of foundation models, these are referred to as the PaLM and Gemini family of generative ai models, and they come in different versions. We are going to cover how to use via API to: - execute prompts in text and chat - cover multimodal use cases with image prompts. - finetune and distill to improve knowledge domains - run function calls with foundation models to optimize them for specific tasks. At the end of the session, developers will understand how to innovate with generative AI and develop apps using the generative ai industry trends.
Open Source Contributions to Postgres: The Basics POSETTE 2024ElizabethGarrettChri
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Oracle12 - The Top12 Features by NAYA Technologies
1. The top 12 new features of Oracle 12c!
David Yahalom,
CTO, NAYA Technologies
www.naya-tech.com
Email:
davidy@naya-tech.co.il
2. About NAYA Technologies
• Founded in 2009, NAYA technologies provides database consulting,
training and Data Platform managed services.
• The company is headquartered in Israel and New York and specializes
in planning, deploying, and managing business critical database
systems for large enterprises and leading startups.
• NAYA is one of the fastest growing boutique consulting companies in
the market with teams that provide clients with the peace of mind they
need when it comes to their critical data and database systems.
3. Our Services and Solutions (FOCUS)
SQL Server
Database
NAYA has years of experience in implementing
Data Platform technologies across different industries.
BigData and
NoSQL
High Availability
Training Services
NAYA College
Oracle Database
Business
Intelligence
MySQL and
PostgreSQL
Databases in the
Azure / AWS
Clouds
Database Security
Oracle Engineered
Solutions
Data Integration
High Performance
Database Tuning
4. • Oracle RealWorld Performance Tuning!
• A very practical seminar designed to provide its participants
with a simple methodology and a clear understanding of the
Oracle tuning process.
• 1. How to best identify our problematic SQL.
• 2. The most powerful and actually useful tools for performance
tuning
• 3. Discuss real world examples of performance tuning issues
and their solutions!
• 4. We will also get to know some of the best Oracle 12c new
features for better performance.
5. • Oracle RealWorld Performance Tuning!
• > Identifying the high load SQL statements
• GUI performance tools (OEM), AWR report, Oracle Tracing
• > Tools for retrieving execution plans and execution statistics
• Autotrace, DBMS_XPLAN, EXPLAIN PLAN FOR, Developers
Graphical tools
• > Understanding execution plans
• How to read execution plans? – What should we look for to
identify core issues?
• > Affecting execution plans to resolve performance issues
Hints, Optimizer statistics, Optimizer Parameters, re-writing the
SQL and more
6. • Oracle RealWorld Performance Tuning!
• > Execution plan real time statistics – Moving from theory to
actual
• > Using SQL Monitoring and the “Gather plan statistics” hint
(View Actual values of the execution compared to the optimizer
estimated ones)
• > Oracle 12c enhancements to SQL Monitoring
7. • Oracle RealWorld Performance Tuning!
• > Stabilizing a good plan for my query using SQL Plan Baselines
• > Adding a hint to my query without changing the SQL in my
code (Magic?)
• > Generating the Oracle performance reports (AWR, ASH etc)
from developer client tools, and using them efficiently
• > Using the Oracle Result Cache to optimize performance
• > Additional tips and tricks for better performance
8. • Oracle RealWorld Performance Tuning!
• > Adaptive Execution plans.
• > Adaptive Statistics and re-optimizations.
• > Additional selected Oracle 12c new features for better
performance.
• 50% discount off the registration price using code:
W2ATNDS
zakf@naya-tech.com
9. www.naya-tech.com | 5 Penn Plaza, 23rd floor Manhattan, New York 10001 +1.212.896.3945
Improved column defaults
SQL> create sequence s;
Sequence created.
SQL> create table my_table
2 ( x int
3 default s.nextval
4 primary key,
5 y varchar2(30)
6 );
Table created.
• > Sequences supported for columns
without a trigger!
10. Improved column defaults
• > We can now use an IDENTITY type!
• > Generates a sequence and associates it
with the table.
create table my_Table
(x int generated as identity
primary key,
y varchar2(30));
11. Improved column defaults
create table t
(x int generated by default
as identity
(start with 42
increment by 1000 )
primary key,
y varchar2(30))
• > Complex identity values supported
12. Increased size limits
> VARCHARS can go up to 32K!
Set the MAX_STRING_SIZE init.ora parameter to
EXTENDED (the database must be restarted in
UPGRADE mode to change it), then run
@?/rdbms/admin/utl32k.sql
create table t ( x varchar(32767) );
>> Actually stored as LOB
>> In-row <= 4K, out of row > 4K…
13. Increased size limits
> But now you can use RPAD/LPAD/TRIM!
SQL> insert into my_tab values ( rpad('*',
32000,'*') );
1 row created.
SQL> select length(x) from my_tab;
LENGTH(X)
——————————————
32000
(previously, string built-in functions could
return only 4,000 bytes)
14. Improved top-N queries
> New Row limiting clause for result set
pagination.
> Support for the ANSI-standard FETCH FIRST/
NEXT and OFFSET
create table t
as select * from all_objects;
create index t_idx on t(owner,object_name);
15. Improved top-N queries
> Retrieve the first five rows after sorting by
OWNER and OBJECT_NAME
select owner, object_name, object_id
from t
order by owner, object_name
FETCH FIRST 5 ROWS ONLY;
16. Improved top-N queries
> The optimizer is rewriting the query to use
analytics!
…
——————————————————————————————————————————————————————————————————————————————
| Id |Operation | Name|Rows |Bytes |Cost (%CPU)|Time |
——————————————————————————————————————————————————————————————————————————————
| 0|SELECT STATEMENT | | 5 | 1450 | 7 (0)|00:00:01|
|* 1| VIEW | | 5 | 1450 | 7 (0)|00:00:01|
|* 2| WINDOW NOSORT STOPKEY | | 5 | 180 | 7 (0)|00:00:01|
| 3| TABLE ACCESS BY INDEX ROWID|T |87310 | 3069K| 7 (0)|00:00:01|
| 4| INDEX FULL SCAN |T_IDX| 5 | | 3 (0)|00:00:01|
——————————————————————————————————————————————————————————————————————————————
Predicate Information (identified by operation id):
—————————————————————————————————————————————————————————————————
1 - filter("from$_subquery$_003"."rowlimit_$$_rownumber"<=5)
2 - filter(ROW_NUMBER() OVER ( ORDER BY "OWNER","OBJECT_NAME")<=5)
17. Improved top-N queries
> To paginate through a result set:
(Get N rows at a time from a specific page in the result set
—add the OFFSET clause).
select owner, object_name, object_id
from t
order by owner, object_name
OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;
18. Temporary UNDO
> Previously:
Temporary tablespace DML
Generates UNDO in the UNDO TBS
(for read consistency)
UNDO TBS changes required REDO for crash
recovery
19. Temporary UNDO
Temp TBS
Redo logs
Undo TBS
Bulk Load
20. Temporary UNDO
Temp TBS & Temporary Undo
Redo logs
Undo TBS
Bulk Load
Permanent tables
Operations on temporary tables will
no longer generate redo.
21. Temporary UNDO
> Can be used with Active DataGuard!
Read-only replicated tables
Read / Write temporary table
(intermediate query results)
Source Database
22. Temporary UNDO
alter session
set temp_undo_enabled = true;
update my_table set object_name =
lower(object_name);
87310 rows updated.
Statistics
———————————————————————————————
…
0 redo size
…
23. New partitioning features
> Move a partition ONLINE!
(non-blocking DDL; DML is still allowed)
alter table test_tbl move partition p1 ONLINE;
24. Transaction Guard
> For database developers.
> API that returns the outcome of the
last transaction.
> Provides protection for sensitive
transactions that must only happen
once.
25. Transaction Guard
> Without:
26. Transaction Guard
CallableStatement c = conn2.prepareCall(
    "declare b1 boolean; b2 boolean; begin "
  + "DBMS_APP_CONT.GET_LTXID_OUTCOME(?, b1, b2); "
  + "? := case when b1 then 'COMMITTED' "
  + "else 'UNCOMMITTED' end; "
  + "end;");
27. Transaction Guard
> With:
28. Adaptive Execution Plans
> Before Oracle 12c, plans were fixed for the
first execution.
> Unexpectedly high row counts could make that first
plan suboptimal.
> With 12c, the optimizer can now generate a
plan plus subplans.
> The optimizer picks the final plan based on the
cardinality observed during the first execution.
> It "changes its mind" in real time!
29. Adaptive Execution Plans
30. Adaptive Execution Plans
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 23 | 4 (0)| 00:00:01 |
| 1 | HASH UNIQUE | | 1 | 23 | 4 (0)| 00:00:01 |
|- * 2 | HASH JOIN SEMI | | 1 | 23 | 4 (0)| 00:00:01 |
| 3 | NESTED LOOPS SEMI | | 1 | 23 | 4 (0)| 00:00:01 |
|- 4 | STATISTICS COLLECTOR | | | | | |
| * 5 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 16 | 3 (0)| 00:00:01 |
| * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 1 | 7 | 1 (0)| 00:00:01 |
| * 7 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | 10 | | 0 (0)| 00:00:01 |
|- * 8 | TABLE ACCESS FULL | EMPLOYEES | 1 | 7 | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
> The STATISTICS COLLECTOR buffers the rows and is
able to switch to a HASH JOIN when the cardinality
turns out to be higher than estimated.
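Both the final plan and the inactive subplan rows can be displayed from the cursor cache with DBMS_XPLAN; a minimal sketch for the current session's last statement:

```sql
-- Display the full adaptive plan, including the
-- rows marked '-' that were deactivated at runtime.
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT => '+ADAPTIVE'));
```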
31. Adaptive Execution Plans
32. Adaptive Execution Plans
Rejected!
Accepted!
33. Adaptive Execution Plans
34. Enhanced Statistics
> New histograms: Top-Frequency, Hybrid.
> New dynamic sampling:
Dynamically sampled statistics (now called
Dynamic Statistics) can be reused.
At level 2 (the default), dynamic statistics are
gathered when at least one table in the query has no
statistics.
At level 11, the database uses dynamic statistics
automatically whenever statistics are missing, stale,
or insufficient.
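The level can be raised per session; a minimal sketch:

```sql
-- Let the database decide when dynamic statistics are
-- needed (missing, stale, or insufficient statistics).
ALTER SESSION SET optimizer_dynamic_sampling = 11;
```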
35. Enhanced Statistics
> Automatically compute statistics
during loads (CTAS).
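A minimal sketch (the sales table is a hypothetical example): the statistics are gathered as part of the CTAS load itself, with no separate DBMS_STATS call:

```sql
CREATE TABLE sales_copy AS SELECT * FROM sales;

-- NUM_ROWS and LAST_ANALYZED are already populated
SELECT num_rows, last_analyzed
FROM   user_tables
WHERE  table_name = 'SALES_COPY';
```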
36. Data Optimisation and ILM
> Oracle 12c creates “Heat Maps”
- tracks and marks data at the row and block
level as it goes through life cycle changes.
> Automatic Data Optimization (ADO) works with the
Heat Map feature and lets us create policies for
data compression and data movement, to implement
storage tiers.
37. Data Optimisation and ILM
> Data can be:
Hot: the object is actively read and written.
Warm: the object is accessed for reads only.
Cold: the object is not participating in
any kind of activity.
38. Data Optimisation and ILM
SQL> alter session set heat_map=on;
SQL> select * from scott.emp;
EMPNO ENAME      JOB        MGR  HIREDATE    SAL  COMM  DEPTNO
----- ---------- --------- ----- ---------- ----- ----- ------
 7369 SMITH      CLERK      7902 17-DEC-80   800           20
 7499 ALLEN      SALESMAN   7698 20-FEB-81  1600   300     30
…
39. Data Optimisation and ILM
select object_name, track_time "Tracking Time",
segment_write "Segment write",
full_scan "Full Scan",
lookup_scan "Lookup Scan"
from DBA_HEAT_MAP_SEG_HISTOGRAM
where object_name='MYOBJECTS'
and owner = 'SCOTT';
OBJECT_NAME  Tracking Time       Segment write  Full Scan  Lookup Scan
-----------  ------------------  -------------  ---------  -----------
MYOBJECTS    09-sep-13 02:40:14  NO             YES        NO
40. Data Optimisation and ILM
ALTER TABLE scott.myobjects ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;
41. Row Pattern Matching
> An extension to the SELECT
statement using MATCH_RECOGNIZE
that allows us to identify patterns
across sequences of rows.
42. Row Pattern Matching
PATTERN (STRT DOWN+ UP+)
DEFINE
DOWN AS
DOWN.price < PREV(DOWN.price),
UP AS UP.price > PREV(UP.price)
XYZ 13-MAR-15 35 ***********************************
XYZ 14-MAR-15 34 **********************************
XYZ 15-MAR-15 33 *********************************
XYZ 16-MAR-15 34 **********************************
XYZ 17-MAR-15 35 ***********************************
XYZ 18-MAR-15 36 ************************************
XYZ 19-MAR-15 37 *************************************
XYZ 20-MAR-15 36 ************************************
XYZ 21-MAR-15 35 ***********************************
XYZ 22-MAR-15 34 **********************************
XYZ 23-MAR-15 35 ***********************************
XYZ 24-MAR-15 36 ************************************
XYZ 25-MAR-15 37 *************************************
Any record, followed by one or more records in which the price of the stock goes
down, followed by one or more records in which the stock price increases.
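A complete query built from the fragment above might look like this (a sketch assuming a hypothetical ticker table with symbol, tstamp and price columns):

```sql
SELECT *
FROM ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY tstamp
  MEASURES STRT.tstamp       AS start_date,  -- start of the V shape
           LAST(DOWN.tstamp) AS bottom_date, -- lowest point
           LAST(UP.tstamp)   AS end_date     -- end of the recovery
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP
  PATTERN (STRT DOWN+ UP+)
  DEFINE
    DOWN AS DOWN.price < PREV(DOWN.price),
    UP   AS UP.price   > PREV(UP.price)
);
```

Each match reports one row per V-shaped price movement found in the sequence.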
43. PL/SQL enhancements.
> Define PL/SQL Subprograms in a
SQL Statement.
> Why would a developer want to copy
logic from a PL/SQL function into a
SQL statement?
To improve performance.
> No context switch to the PL/SQL
engine.
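For example, a function can be declared inline in the query's WITH clause (a sketch; the employees table and email column are hypothetical):

```sql
WITH
  FUNCTION get_domain(p_email VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- everything after the '@' sign
    RETURN SUBSTR(p_email, INSTR(p_email, '@') + 1);
  END;
SELECT get_domain(email)
FROM   employees;
/
```

The function lives only for the duration of the statement, and the SQL engine can run it without switching to the PL/SQL engine.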
44. Pluggable Databases
A PDB is a self-contained, fully
functional Oracle Database, and
includes its own system, sysaux
and user tablespaces.
45. Pluggable Databases
> CDB: Similar to a conventional Oracle
database.
> Contains most of the working parts you will already be
familiar with (controlfiles, datafiles, undo, tempfiles, redo
logs, etc.).
> Contains the data dictionary for those objects that are
owned by the root container and those that are visible to
all PDBs.
46. Pluggable Databases
> PDB: Contains information specific to itself.
> Made up of datafiles and tempfiles to hold its own
objects: includes its own data dictionary, containing
information about only those objects that are specific to
the PDB.
47. Pluggable Databases
48. Pluggable Databases
49. Pluggable Databases
> Allows databases to be moved easily
> Allows quick patching and upgrading to future
versions.
A PDB can be unplugged from a 12.1 CDB and plugged
into a 12.2 CDB, effectively upgrading it in seconds.
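The unplug/plug cycle itself is only a few statements (a sketch; the PDB name and XML path are placeholders):

```sql
-- On the source CDB: close and unplug the PDB
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';

-- On the target CDB: plug it in and open it
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml' NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```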
50. Pluggable Databases
12.1.0.2