RAC Cache Fusion allows Oracle Real Application Clusters instances to share cached data in memory to avoid disk I/O and improve performance. Key aspects of Cache Fusion include global cache services coordinating cached data across instances, maintaining data consistency through modes and roles for cached blocks, and keeping past images of dirty blocks for recovery purposes. Cache blocks can be accessed locally or globally depending on their assigned role and mode.
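The modes-and-roles idea above can be sketched as a toy model. This is an illustration only: the class and method names, the mode letters, and the single-master simplification are my own, not Oracle's implementation.

```python
# Toy model of a Cache Fusion block transfer: an exclusive request forces the
# current exclusive holder to ship the block, keep a past image for recovery,
# and downgrade to null mode with a global role.

class GlobalCacheService:
    def __init__(self):
        # block_id -> {instance: (mode, role)}; modes: 'X' exclusive,
        # 'S' shared, 'N' null; roles: 'L' local, 'G' global.
        self.holders = {}
        self.past_images = []   # (block_id, instance) pairs kept for recovery

    def acquire(self, block_id, instance, mode):
        holders = self.holders.setdefault(block_id, {})
        if mode == 'X':
            for other, (m, _) in list(holders.items()):
                if m == 'X' and other != instance:
                    # The dirty block leaves this cache: keep a past image.
                    self.past_images.append((block_id, other))
                    holders[other] = ('N', 'G')
            holders[instance] = ('X', 'G' if holders else 'L')
        else:
            holders.setdefault(instance, ('S', 'L'))
        return holders[instance]

gcs = GlobalCacheService()
gcs.acquire(42, 'inst1', 'X')   # inst1 dirties block 42; role stays local
gcs.acquire(42, 'inst2', 'X')   # inst2 wants it: inst1 keeps a past image
print(gcs.past_images)          # [(42, 'inst1')]
```

The point of the sketch is the state transition, not the messaging: the previous holder does not flush to disk, it just retains a past image so recovery can reconstruct the block if the new holder fails.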
Exploring Oracle Database Performance Tuning Best Practices for DBAs and Deve...Aaron Shilo
The document provides an overview of Oracle database performance tuning best practices for DBAs and developers. It discusses the connection between SQL tuning and instance tuning, and how tuning both the database and SQL statements is important. It also covers the connection between the database and operating system, how features like data integrity and zero downtime updates are important. The presentation agenda includes topics like identifying bottlenecks, benchmarking, optimization techniques, the cost-based optimizer, indexes, and more.
Flink Forward Berlin 2017: Stefan Richter - A look at Flink's internal data s...Flink Forward
The document discusses Flink's use of internal data structures to efficiently support checkpointing. It describes how Flink uses RocksDB, a log-structured merge tree database, as a backend to support asynchronous and incremental checkpoints. RocksDB allows checkpoints to be taken with low overhead by creating immutable snapshots of the on-disk data structures. It also facilitates incremental checkpoints by efficiently detecting state changes between checkpoints based on the creation and deletion of immutable sorted string tables. The document provides an example to illustrate how RocksDB integrates with the distributed filesystem and job manager to support incremental checkpointing. It also discusses how Flink uses a copy-on-write hash map approach with the heap state backend to support asynchronous checkpoints while detecting state
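The copy-on-write idea mentioned at the end can be shown in a few lines. This is a deliberately simplified sketch of the general technique, not Flink's actual CopyOnWriteStateMap: taking the snapshot is O(1), and the first write after a snapshot copies the table so the pinned view stays unchanged while processing continues.

```python
# Copy-on-write map: snapshotting is cheap because the copy is deferred to
# the first write that arrives after the snapshot.

class CowMap:
    def __init__(self):
        self._live = {}
        self._snapshotted = False

    def put(self, key, value):
        if self._snapshotted:
            # First write after a snapshot: copy so the snapshot is untouched.
            self._live = dict(self._live)
            self._snapshotted = False
        self._live[key] = value

    def snapshot(self):
        # O(1): just pin a reference to the current table.
        self._snapshotted = True
        return self._live

m = CowMap()
m.put('count', 1)
snap = m.snapshot()       # asynchronous checkpoint starts here
m.put('count', 2)         # processing continues without blocking
print(snap['count'], m._live['count'])   # 1 2
```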
This presentation discusses SQL plan management (SPM) and SQL quarantine features in Oracle Database 19c. SPM allows the cost-based optimizer to use optimal execution plans and prevent performance regression. SQL quarantine complements SPM by preventing SQL statements from repeatedly using bad execution plans. The presentation covers how SPM and SQL quarantine work, how to configure them, and reasons why SPM may not be used for a SQL statement. It also discusses other plan stability methods like stored outlines, optimizer hints, and SQL profiles.
The document summarizes new features in Oracle Recovery Manager (RMAN) for Oracle 19c and 18c database releases. Key highlights include the ability to grant and revoke RMAN catalog privileges on specific pluggable databases, support for connecting to recovery catalogs when connected to a pluggable database target, and the new DUPLICATE PLUGGABLE DATABASE command for duplicating pluggable databases to existing container databases. The document also discusses duplicating databases to Oracle Cloud and using RMAN backups after migrating databases between platforms.
This document discusses HDFS high availability using Journal Nodes. It describes how Journal Nodes provide a write ahead log to synchronize data between an active and standby NameNode. The architecture involves Journal Nodes durably logging metadata operations to tolerate NameNode failures. An automatic failover process uses ZooKeeper for elections to transition the standby NameNode to an active state when the active NameNode fails.
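The quorum-journaling idea can be sketched as follows. This is a hypothetical simplification of the quorum journal manager, not HDFS code: an edit counts as durable once a majority of journal nodes have logged it, so it survives any minority of failures and a standby NameNode can replay a consistent log after failover.

```python
# Sketch of majority-quorum write-ahead logging across journal nodes.

class JournalNode:
    def __init__(self, up=True):
        self.up = up
        self.log = []

    def append(self, edit):
        if not self.up:
            raise ConnectionError("journal node down")
        self.log.append(edit)
        return True

def write_edit(journals, edit):
    acks = 0
    for jn in journals:
        try:
            acks += jn.append(edit)
        except ConnectionError:
            pass
    # Durable only with a strict majority of acknowledgements.
    return acks > len(journals) // 2

journals = [JournalNode(), JournalNode(), JournalNode(up=False)]
print(write_edit(journals, "mkdir /data"))   # True: 2 of 3 acked
```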
The document provides an introduction to Oracle Data Guard and high availability concepts. It discusses how Data Guard maintains standby databases to protect primary database data from failures, disasters, and errors. It describes different types of standby databases, including physical and logical standby databases, and how redo logs are applied from the primary database to keep the standbys synchronized. Real-time apply is also introduced, which allows for more up-to-date synchronization between databases with faster failover times.
The document discusses backup and recovery concepts including the purposes of backups for disaster recovery, operational backups, and archiving. It covers backup considerations like retention periods and file sizes. Backup methods can be full, incremental, differential or synthetic. The backup process involves a backup server coordinating with clients and storage nodes. Restore is initiated manually. Common backup topologies are direct-attached, LAN-based, and SAN-based.
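The difference between the backup methods comes down to which files each one copies. The sketch below uses an assumed scenario (modification days, a last full backup at day 2 and a last incremental at day 5) that is my own example, not tied to any product.

```python
# Which files does each backup method copy?
files = {"a.db": 1, "b.db": 4, "c.db": 6}   # name -> day last modified
LAST_FULL, LAST_INCREMENTAL = 2, 5

def full_backup(files):
    # Everything, every time.
    return sorted(files)

def incremental_backup(files):
    # Only files changed since the most recent backup of ANY kind.
    return sorted(f for f, day in files.items() if day > LAST_INCREMENTAL)

def differential_backup(files):
    # Files changed since the last FULL backup, so a restore needs only
    # the full backup plus the latest differential.
    return sorted(f for f, day in files.items() if day > LAST_FULL)

print(full_backup(files))          # ['a.db', 'b.db', 'c.db']
print(incremental_backup(files))   # ['c.db']
print(differential_backup(files))  # ['b.db', 'c.db']
```

Incrementals minimize backup size but lengthen restores (full plus every incremental); differentials grow over time but keep restores to two steps.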
Oracle Flex ASM - What’s New and Best Practices by Jim WilliamsMarkus Michalewicz
Oracle Open World (OOW) 2014 Presentation by Jim Williams (Oracle ASM Product Manager) on Oracle Flex ASM - What's New and Best Practices. The presentation provides an overview of enhancements (What's New) in Oracle ASM 12c, especially with respect to Oracle Flex ASM, and provides best practices which can be applied in any environment (Flex or Standard ASM). This presentation also has more background information on some of the configuration recommendations that I made in my "Oracle RAC (12.1.0.2) Operational Best Practices" presentation.
ODA Backup Restore Utility & ODA Rescue Live DiskRuggero Citton
When applying maintenance to Oracle Database Appliance, it's best practice to back up the ODA system environment (local system boot disk). DBAs have procedures to backup and recover the database but it is also important that you are able to backup and recover the environment that runs the database. This is especially useful if you encounter an issue during patching; you can quickly restore the system disk back to the pre-patch state.
Understanding oracle rac internals part 1 - slidesMohamed Farouk
This document discusses Oracle RAC internals and architecture. It provides an overview of the Oracle RAC architecture including software deployment, processes, and resources. It also covers topics like VIPs, networks, listeners, and SCAN in Oracle RAC. Key aspects summarized include the typical Oracle RAC software stack, local and cluster resources, how VIPs and networks are configured, and the role and dependencies of listeners.
The document discusses optimization of Real Application Clusters (RAC) in Oracle 12c. It provides background on the author and outlines common root causes of RAC performance issues such as CPU/memory starvation, network issues, and excessive dynamic remastering. The document then presents golden rules for RAC diagnostics including avoiding focusing only on top wait events, eliminating infrastructure issues, identifying problem instances, examining both send and receive side metrics, and using histograms. Specific techniques are described for analyzing wait events like gc buffer busy.
This document provides a summary of a presentation on Oracle Real Application Clusters (RAC) integration with Exadata, Oracle Data Guard, and In-Memory Database. It discusses how Oracle RAC performance has been optimized on Exadata platforms through features like fast node death detection, cache fusion optimizations, ASM optimizations, and integration with Exadata infrastructure. The presentation agenda indicates it will cover these RAC optimizations as well as integration with Oracle Data Guard and the In-Memory database option.
The document discusses compaction in RocksDB, an embedded key-value storage engine. It describes the two compaction styles in RocksDB: level style compaction and universal style compaction. Level style compaction stores data in multiple levels and performs compactions by merging files from lower to higher levels. Universal style compaction keeps all files in level 0 and performs compactions by merging adjacent files in time order. The document provides details on the compaction process and configuration options for both styles.
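The core of a compaction step in any LSM engine is merging sorted tables while keeping only the newest value per key. The sketch below is a generic illustration of that merge, not RocksDB's actual code, and the table contents are made up.

```python
# Merge several sorted string tables (SSTables) into one, newest wins.

def compact(sstables):
    """sstables: ordered newest first; each is a sorted list of (key, value)."""
    newest = {}
    # Apply oldest -> newest so later (newer) writes overwrite older ones.
    for table in reversed(sstables):
        for key, value in table:
            newest[key] = value
    # The output of a compaction is again a sorted table.
    return sorted(newest.items())

l0_new = [("a", 2), ("c", 9)]          # newer table
l0_old = [("a", 1), ("b", 5)]          # older table
print(compact([l0_new, l0_old]))       # [('a', 2), ('b', 5), ('c', 9)]
```

Level-style compaction applies this merge between a file and the overlapping files one level down; universal style applies it to adjacent files in time order within level 0.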
This document provides guidance on using Oracle's Exadata Cloud Service (ExaCS) or Exadata Cloud at Customer (ExaCC) to set up disaster recovery for an on-premises database using Oracle Data Guard or Active Data Guard. It outlines the key benefits of a hybrid cloud/on-premises configuration and provides a 10-step process for implementing this along with considerations for security, networking, and ongoing management after deployment. The document is intended to help technical audiences set up a cloud-based standby database for disaster recovery that follows Oracle Maximum Availability Architecture best practices.
Cloud DW benchmark using TPC-DS (Snowflake vs Redshift vs EMR Hive)SANG WON PARK
For the past few years, Data Architecture has been changing rapidly.
Cloud DW in particular has drawn attention as an answer to the limitations (performance, cost, operations, and so on) of the existing Hadoop-based Data Lake, and many companies have already adopted it or are evaluating adoption.
This material explains Cloud DW conceptually and compares, from a performance and cost perspective, which of the various Cloud DW products on the market best fits a company's environment.
- Why are companies paying attention to Cloud DW?
- What products are on the market?
- Which product should we adopt for our business environment?
- How do Cloud DW solutions perform?
- How do they perform against the existing Data Lake (EMR)?
- How do comparable Cloud DWs (Snowflake vs Redshift) compare?
Going forward, the data market will rapidly grow a new ecosystem around Cloud DW, including ELT, Data Mesh, and Reverse ETL, so technical review from the data engineer and data architect perspective will be needed.
https://blog.naver.com/freepsw/222654809552
In recent years, we have seen an overwhelming number of TV commercials promising that the Cloud can help with many problems, including some family issues. What stands behind the terms “Cloud” and “Cloud Computing,” and what can we actually expect from this phenomenon? A group of students from the Computer Systems Technology department and Dr. T. Malyuta, who has been working with Cloud technologies since their early days, provide an overview of the business and technological aspects of the Cloud.
This document discusses SQL Server 2012 AlwaysOn, a high availability and disaster recovery solution. It provides an overview of AlwaysOn availability groups, which allow for multiple synchronous or asynchronous copies of databases across instances. Key features include readable secondary replicas, automatic instance and database failover, and the ability to perform backups on secondary replicas. The document also demonstrates AlwaysOn configuration and functionality through a virtual machine-based lab environment.
The document describes how to convert a single instance Oracle database to Oracle Real Application Clusters (RAC) using RMAN. The key steps include:
1. Duplicating the single instance database to an auxiliary instance on the RAC nodes using RMAN DUPLICATE.
2. Configuring the RAC-specific initialization parameters and creating the necessary redo logs and undo tablespaces.
3. Starting instances on each RAC node and registering them with the Cluster Ready Services framework.
This document provides an overview of Real Application Clusters (RAC) and Global Cache Services (GCS). It discusses how RAC allows multiple instances to access shared database blocks using GCS. GCS implements cache fusion to ensure only one instance can update a block at a time while allowing multiple instances to have consistent read versions. It describes various GCS operations like current reads, downgrades, and maintaining past images of dirty blocks. Wait events are also presented to show messages exchanged during reads involving other instances.
Understanding Query Optimization with ‘regular’ and ‘Exadata’ OracleGuatemala User Group
The document discusses query optimization with regular Oracle databases and Exadata databases. It explains what happens when a SQL statement is issued, including parsing, optimization, and execution. It describes what an execution plan is and how it can be generated and displayed. It discusses how operations can be offloaded to storage cells on Exadata and factors the optimizer considers for determining a good execution plan.
What to Expect From Oracle database 19cMaria Colgan
The Oracle Database has recently switched to an annual release model. Oracle Database 19c is only the second release in this new model. So what can you expect from the latest version of the Oracle Database? This presentation explains how Oracle Database 19c is really 12.2.0.3, the terminal release of the 12.2 family, and covers the new features you can find in this release.
This document summarizes a presentation on Oracle RAC (Real Application Clusters) internals with a focus on Cache Fusion. The presentation covers:
1. An overview of Cache Fusion and how it allows data to be shared across instances to enable scalability.
2. Dynamic re-mastering which adjusts where data is mastered based on access patterns to reduce messaging.
3. Techniques for handling contention including partitioning, connection pools, and separating redo logs.
4. Benefits of combining Oracle Multitenant and RAC such as aligning PDBs to instances.
5. How Oracle In-Memory Column Store fully integrates with RAC including fault tolerance features.
Exadata is Oracle's database machine that combines database servers with intelligent storage servers. It uses several techniques like column projection, predicate filtering, and storage indexes to drastically reduce the amount of data that needs to be transferred from storage to the database servers. This allows Exadata to process queries much faster than a traditional database system.
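The effect of pushing work to storage can be illustrated with toy numbers. The rows, column widths, and the `bytes_of` helper below are my own invented example, meant only to show why predicate filtering and column projection shrink what crosses the wire.

```python
# Compare bytes shipped with and without storage-side filtering/projection.

rows = [
    {"id": 1, "region": "EU", "amount": 10, "note": "x" * 100},
    {"id": 2, "region": "US", "amount": 20, "note": "y" * 100},
    {"id": 3, "region": "EU", "amount": 30, "note": "z" * 100},
]

def bytes_of(rows, columns):
    # Crude size estimate: length of the string form of each kept value.
    return sum(len(str(r[c])) for r in rows for c in columns)

# Traditional path: ship whole blocks, filter on the database server.
shipped_all = bytes_of(rows, ["id", "region", "amount", "note"])

# Offloaded path: storage applies WHERE region = 'EU' (predicate filtering)
# and returns only the id and amount columns (column projection).
eu = [r for r in rows if r["region"] == "EU"]
shipped_offloaded = bytes_of(eu, ["id", "amount"])

print(shipped_all, shipped_offloaded)   # 315 6
```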
This document discusses network considerations for Real Application Clusters (RAC). It describes the different network types used, including public, private, storage, and backup networks. It discusses protocols like TCP and UDP used for different traffic. It also covers concepts like network architecture, layers, MTU, jumbo frames, and tools for monitoring network performance like netstat, ping, and traceroute.
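The MTU and jumbo-frame point is easy to quantify. Assuming IPv4 plus UDP headers (20 + 8 bytes) and ignoring Ethernet framing, the win from jumbo frames is less about header bytes and more about packet count: an 8 KB database block fits in one jumbo frame but fragments across several standard frames.

```python
import math

def payload_efficiency(mtu, headers=28):
    # Fraction of each packet that carries payload (IPv4 20 + UDP 8 bytes).
    return (mtu - headers) / mtu

std = payload_efficiency(1500)     # ~0.981
jumbo = payload_efficiency(9000)   # ~0.997

# Packets needed to move one 8 KB block across the interconnect.
frames_std = math.ceil(8192 / (1500 - 28))
frames_jumbo = math.ceil(8192 / (9000 - 28))
print(frames_std, frames_jumbo)    # 6 1
```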
How to Build a Scylla Database Cluster that Fits Your NeedsScyllaDB
Sizing a database cluster makes or breaks your application. Too small and you cannot sustain spikes in usage or recover from a node loss or an operational slowdown. Too big and your cluster costs more and wastes valuable human resources.
Since different workloads have different requirements, a successfully sized cluster should be optimized for both throughput and latency. In many cases, however, these requirements contradict each other.
In this webinar, we explain how to remediate the contradicting forces and build a sustainable cluster to meet both performance and resiliency requirements.
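One common way to reconcile the two forces is to size for throughput and for storage separately, then take the larger and add failure headroom. The figures and the `nodes_needed` function below are hypothetical assumptions for illustration, not ScyllaDB guidance.

```python
import math

def nodes_needed(peak_ops, ops_per_node, data_gb, replication, gb_per_node):
    # Nodes required to serve peak throughput.
    for_throughput = math.ceil(peak_ops / ops_per_node)
    # Nodes required to hold the replicated data set.
    for_storage = math.ceil(data_gb * replication / gb_per_node)
    base = max(for_throughput, for_storage)
    # Headroom so losing one node does not overload the survivors.
    return base + 1

print(nodes_needed(peak_ops=300_000, ops_per_node=100_000,
                   data_gb=2_000, replication=3, gb_per_node=1_500))  # 5
```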
Oracle Enterprise Manager Security: A Practitioners GuideCourtney Llamas
East Coast Oracle Users Group 2015 - Oracle Enterprise Manager 12c security framework can be quite overwhelming for the EM administrator. It's often hard to understand how the components interact and how to best leverage them for your organization. Learn how to take advantage of Enterprise Manager roles, groups and named credentials to properly grant permissions and privileges to users. Utilizing EM privileges, we'll show how you can safely grant access to application teams and developers, without the worry of changes being made.
This presentation was given at the LDS Tech SORT Conference 2011 in Salt Lake City. The slides are quite comprehensive, covering many topics on MongoDB. Rather than a traditional presentation, this was presented as more of a Q & A session. Topics covered include: introduction to MongoDB, use cases, schema design, high availability (replication), and horizontal scaling (sharding).
This is my deck from Cloud Conference Torino 2013 (http://www.cloudconf.it). I was the post-lunch speaker, so this one is more silly and there was a lot of off-deck riffing, so this is here only for posterity.
I initially planned to speak on cloud-specific stuff, but it turned into an intro to MongoDB.
Aman sharma hyd_12crac High Availability Day 2015aioughydchapter
This document discusses new features in Oracle RAC and ASM in Oracle Database 12c. It introduces Flex Clusters, which use a hub-and-spoke topology to improve scalability over traditional RAC clusters. Leaf nodes run application workloads and connect to hub nodes, which run databases and ASM. Server pools can now manage both hub and leaf nodes to isolate workloads. Other new features include shared Grid Naming Service (GNS) configurations, policy-based cluster administration using server categorization and policies, and Multitenant databases with RAC.
This document provides instructions for converting an existing standard 2-node Oracle RAC environment to use Flex Clusters and Flex ASM in Oracle 12c. It describes Flex Clusters, which introduce Hub and Leaf nodes, and Flex ASM, which allows ASM instances to run on separate physical servers. The instructions show how to check the current cluster and ASM modes, and then use the ASM Configuration Assistant GUI to convert to Flex ASM, selecting the default listener and private interconnect for ASM network traffic.
This document provides an overview of Apache Spark, including what it is, its core components like Resilient Distributed Datasets (RDDs), Spark SQL, MLlib, and how Spark executions work. It then states that the presentation will demonstrate building a Spark application for time series predictive analysis.
Flex Your Database on 12c's Flex ASM and Flex ClusterMaaz Anjum
This document provides an overview of Flex Clusters and Flex ASM in Oracle Database 12c. It defines Flex Clusters as a scalable and dynamic architecture with hub and leaf nodes. Leaf nodes do not require direct access to shared storage. It describes how to configure a cluster as a Flex Cluster and change node roles. It also introduces Flex ASM, which allows ASM to run on fewer nodes while providing failover of client connections.
Running Analytics at the Speed of Your Business (Redis Labs)
The speed at which you can extract insights from your data is increasingly a competitive edge for your business. Data and analytics must run at lightning-fast speeds to seriously impact your user acquisition.
Join this webinar featuring Forrester analyst Noel Yuhanna and Leena Joshi, VP Product Marketing at Redis Labs to learn how you can glean insights faster with new open source data processing frameworks like Spark and Redis.
In this webinar you will learn:
* Why analytics has to run at the real time speed of business
* How this can be achieved with next generation Big Data tools
* How data structures can optimize your hybrid transaction-analytics processing scenarios
http://bit.ly/1BTaXZP – As organizations look for even faster ways to derive value from big data, they are turning to Apache Spark, an in-memory processing framework that offers lightning-fast big data analytics, providing speed, developer productivity, and real-time processing advantages. The Spark software stack includes a core data-processing engine, an interface for interactive querying, Spark Streaming for streaming data analysis, and growing libraries for machine learning and graph analysis. Spark is quickly establishing itself as a leading environment for fast, iterative in-memory and streaming analysis. This talk gives an introduction to the Spark stack, explains how Spark achieves its lightning-fast results, and shows how it complements Apache Hadoop. By the end of the session, you'll come away with a deeper understanding of how you can unlock deeper insights from your data, faster, with Spark.
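The lazy, chained transformation model at the heart of Spark's RDD abstraction can be sketched in plain Python; no Spark installation is assumed, and the function names below merely mirror the RDD API rather than reproducing it:

```python
# A minimal sketch of Spark's lazy-transformation model using plain
# Python generators. Each "transformation" builds a new generator;
# nothing runs until the pipeline is finally consumed (the "action").

def parallelize(data):
    return iter(data)

def flat_map(rdd, f):
    return (y for x in rdd for y in f(x))

def map_(rdd, f):
    return (f(x) for x in rdd)

def reduce_by_key(rdd, f):
    # In Spark this step triggers a shuffle; here we just aggregate
    # pairs into a dict, which also consumes the lazy pipeline.
    acc = {}
    for k, v in rdd:
        acc[k] = f(acc[k], v) if k in acc else v
    return acc

lines = parallelize(["to be or", "not to be"])
words = flat_map(lines, str.split)                 # lazy: nothing runs yet
pairs = map_(words, lambda w: (w, 1))              # still lazy
counts = reduce_by_key(pairs, lambda a, b: a + b)  # pipeline executes here
print(counts["to"])  # → 2
```

The real RDD API behaves analogously: `flatMap` and `map` only record lineage, and an action such as `reduceByKey().collect()` triggers execution.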
This presentation was given at the Oracle Technical Carnival China 2016 and is the second presentation on the Oracle Sharding feature to be released in 12.2. It describes, through a real-world case, how Oracle constructs sharded tables and duplicated tables.
Spark Streaming Tips for Devs and Ops, by Fran Perez and Federico Fernández (J On The Beach)
During this talk we will walk through a regular Kafka/Spark Streaming application, going through some of the most common issues and how to fix them. We'll see how to improve a Spark app from two different points of view: code quality and Spark tuning. The final goal is a robust and resilient Spark application deployable in a production-like environment.
Policy-based cluster management in Oracle 12c (Anju Garg)
Oracle Grid Infrastructure 12c enhances the use of server pools by introducing server attributes (e.g. memory and CPU count) that can be associated with each server. Server pools can be configured so that their members belong to a category of servers sharing a particular set of attributes. Moreover, administrators can maintain a library of policies and switch between them as required, rather than manually reallocating servers to server pools based on workload. This paper discusses the new features of policy-based cluster management in 12c in detail.
This document discusses leveraging Oracle Integration Cloud Service for integrating Oracle E-Business Suite. It provides an overview of Integration Cloud Service and the E-Business Suite adapter. It demonstrates how the E-Business Suite adapter can be used as an invoke (target) and trigger (source). Example integration scenarios for service requests and order to invoice are also presented. The document concludes with a roadmap for future enhancements to the E-Business Suite adapter and references for additional resources.
The document discusses serverless architecture and function as a service (FaaS). It notes that serverless allows developers to deploy code as independent functions that are triggered by events and only charge when functions run, scaling automatically. Functions have no disk access and are stateless, running in ephemeral containers. Serverless fits well for static websites, data stream analysis, file processing, and actions users directly pay for on demand. The document outlines Amazon's serverless ecosystem and provides an example architecture and use cases. It also discusses benefits like lower costs and easier scaling but notes potential drawbacks around vendor lock-in and cold starts.
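The stateless, event-triggered function model described above can be illustrated with a minimal Lambda-style handler. The `handler(event, context)` signature follows the AWS Lambda Python convention; the S3-like event shape and its field names here are hypothetical, chosen to mirror the file-processing use case:

```python
# A minimal sketch of a stateless FaaS handler in the AWS Lambda style.
# Everything the function needs arrives in the event; no disk access,
# no state kept between invocations.

import json

def handler(event, context=None):
    # Pull file sizes out of an S3-style upload notification (the exact
    # event shape here is illustrative, not a guaranteed schema).
    records = event.get("Records", [])
    sizes = [r["s3"]["object"]["size"] for r in records]
    return {
        "statusCode": 200,
        "body": json.dumps({"files": len(sizes), "bytes": sum(sizes)}),
    }

event = {"Records": [{"s3": {"object": {"key": "a.csv", "size": 1024}}}]}
print(handler(event))
```

Because the function is stateless, the platform can run it in any fresh ephemeral container and scale copies horizontally, which is exactly what enables pay-per-invocation billing.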
The document describes the steps to configure Oracle sharding in an Oracle 12c environment. It includes installing Oracle software on shardcat, shard1, and shard2 nodes, creating an SCAT database, installing the GSM software, configuring the shard catalog, registering the shard nodes, creating a shard group and adding shards, deploying the shards to create databases on shard1 and shard2, verifying the shard configuration, creating a global service, and creating a sample schema and shard table to verify distribution across shards.
The document discusses best practices for collaborating on Oracle Real Application Clusters (RAC) 12c installations. It covers standardizing on clusters for flexibility and high availability, standardizing on Oracle RAC for scalability and online upgrades/patches. The agenda includes installing Oracle Grid Infrastructure 12c, installing Oracle database homes, creating Oracle RAC databases using DBCA, and post-installation configuration steps. Recommendations are provided around storage configurations, server preparations, network interfaces and other installation options.
This document provides an overview of Oracle 12c Sharded Database Management. It defines what sharding is, how it works, and the benefits it provides such as extreme scalability, fault isolation, and cost reduction. It discusses Oracle's implementation of sharding using database partitioning and Global Data Services (GDS). Key concepts covered include shards, chunks, consistent hashing, and how Oracle supports operations across shards through GDS request routing.
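The consistent-hashing idea behind chunk-to-shard mapping can be sketched in a few lines. This toy ring is an illustration of the general technique, not Oracle's actual implementation; all names are hypothetical:

```python
# A toy consistent-hash ring: keys hash onto a circle of points, and
# each key belongs to the first shard point at or after its hash.

import bisect
import hashlib

def h(key):
    # Stable 32-bit hash derived from MD5 (illustrative choice).
    return int(hashlib.md5(str(key).encode()).hexdigest(), 16) % (2**32)

class Ring:
    def __init__(self, shards, vnodes=64):
        # Virtual nodes: each shard owns many points on the circle,
        # which smooths out the key distribution across shards.
        self.points = sorted(
            (h(f"{s}#{i}"), s) for s in shards for i in range(vnodes)
        )
        self.keys = [p for p, _ in self.points]

    def shard_for(self, key):
        i = bisect.bisect(self.keys, h(key)) % len(self.keys)
        return self.points[i][1]

ring = Ring(["shard1", "shard2"])
print(ring.shard_for("customer:42"))  # deterministically one of the shards
```

The payoff of this scheme is elasticity: adding a shard only claims the points it hashes to, so only the chunks nearest those points move, instead of rehashing every key.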
Understanding Oracle RAC 12c Internals as presented during Oracle Open World 2013 with Mark Scardina.
This is part two of the Oracle RAC 12c "reindeer series" used for OOW13 Oracle RAC-related presentations.
The document provides an overview of new features in Oracle Database 12c for developers and DBAs. It begins with introductions and background about the presenter, Alex Zaballa. The presentation then covers many new 12c features such as pluggable databases, data redaction, JSON support, and improved availability, security, and manageability capabilities. Code examples and demos are provided to illustrate several of the new features.
This document provides an overview of Oracle 9i Real Application Clusters (RAC) on Linux. It discusses the benefits of RAC such as scalability, high availability, and transparent expansion. Key components of RAC are described including cache fusion, global cache management, and resource coordination. Failure detection and recovery processes are also summarized. The document concludes with information on configuring Oracle 9i RAC and Linux kernel parameters on Linux systems.
The document provides an overview of Oracle 10g Real Applications Cluster (RAC) training, including:
1) The agenda covers RAC architecture, theory, 10g RAC, and administration including initialization parameters, space management, tools, and a CRS installation example.
2) RAC allows multiple Oracle instances running on multiple nodes to share a single physical database for high availability, scalability, and performance. Cache fusion synchronizes caches using global cache management.
3) Clusterware provides group membership, heartbeats, and failure detection/response for RAC clusters. It manages shared storage, network configuration, and node membership.
Oracle Real Application Cluster (RAC) allows multiple instances of an Oracle database to run simultaneously on multiple nodes. It provides high availability, scalability, and transparent application failover. Key components include shared storage, Oracle Clusterware, cache fusion for data synchronization, and Transparent Application Failover for uninterrupted connections.
This document provides an overview of advanced RAC troubleshooting concepts by Riyaj Shamsudeen. It discusses key concepts related to cache coherency, single and multi-block reads and transfers in RAC, buffer changes when modifying data, and common wait events seen in RAC environments like gc cr block 2-way and gc cr block 3-way. The document is intended for experienced Oracle professionals and provides examples and demonstrations of the various RAC concepts discussed.
Version 1 is a 700-strong, values-driven IT organization that aims to prove IT can benefit businesses through service excellence, innovation and improvement. It has bases in the UK and Ireland and provides services across four main sectors including commercial, financial, public and utilities. Real Application Clusters (RAC) allows multiple Oracle instances to access a single database on shared storage, providing increased availability, scalability and performance. Cache fusion allows data blocks to be shared across instances' caches for faster access. Global cache and global enqueue background processes coordinate this caching and concurrency control.
This document provides an introduction to Docker and Openshift including discussions around infrastructure, storage, monitoring, metrics, logs, backup, and security considerations. It describes the recommended infrastructure for a 3 node Openshift cluster including masters, etcd, and nodes. It also discusses strategies for storage, monitoring both internal pod status and external infrastructure metrics, collecting and managing logs, backups, and security features within Openshift like limiting resource usage and isolating projects.
The document discusses several key aspects of processes and memory management in Linux:
1. A process is represented by a task_struct structure that contains information like the process ID, open files, address space, and state.
2. Each process has both a user stack and kernel stack. The kernel stack is fixed size for safety and to prevent fragmentation.
3. Process duplication is done through fork(), vfork(), and clone() system calls. Fork uses copy-on-write to efficiently duplicate the process.
4. Memory allocation for kernel structures like task_struct uses slab allocators to improve performance over the buddy allocator through object caching and reuse.
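The fork-and-copy-on-write behavior from point 3 can be observed directly. This is a Unix-only sketch using Python's `os.fork`, which wraps the `fork()` system call; the kernel shares pages between parent and child and copies a page only when one side writes to it:

```python
# After fork(), the child gets a logically private copy of the parent's
# memory, implemented lazily via copy-on-write: a write in the child
# copies the affected page and never appears in the parent.

import os

counter = 100
r, w = os.pipe()  # channel for the child to report its value back

pid = os.fork()
if pid == 0:                     # child process
    counter += 1                 # triggers copy-on-write of this page
    os.write(w, str(counter).encode())
    os._exit(0)
else:                            # parent process
    os.waitpid(pid, 0)
    child_value = int(os.read(r, 16))
    print(child_value, counter)  # → 101 100: the parent's copy is untouched
```

The same mechanism is why `fork()` is cheap even for large processes: nothing is physically copied until one side actually writes.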
This document provides an overview of Flex Clusters and Flex ASM in Oracle databases. It defines Flex Clusters as having both hub nodes, which have direct access to shared storage similar to standard clusters, and leaf nodes, which do not require direct storage access. It describes how Flex ASM allows Oracle ASM to run in a Flex Cluster environment across multiple ASM instances. It also provides instructions on converting existing clusters and ASM configurations to the Flex model.
The document provides information about finding the location of OCR and voting disks in an Oracle RAC environment. It states that the OCR location can be found in the /etc/oracle/ocr.loc file and the voting disk location can be found using the crsctl query css votedisk command. It also provides information on backing up the OCR and voting disks, such as using dd to backup voting disks and ocrconfig to backup and restore OCR.
This document provides an overview of Real Application Clusters (RAC) for beginners. It defines what RAC is, explains the key components and terminology of RAC including cache fusion, global locking, interconnects, and virtual IPs. It describes how data is stored and replicated across multiple database instances and nodes in a RAC configuration. The document also discusses important RAC concepts such as instance recovery, undo tablespaces, storage layout, TNS entries, and virtual IPs.
Real Application Cluster (RAC) allows multiple computers to simultaneously run Oracle RDBMS while accessing a single database, providing clustering. RAC provides high availability, scalability, and ease of administration by making multiple instances transparent to users. Nodes must have identical environments. Oracle Clusterware manages node additions and removals. Instances from different nodes write to the same physical database. The presentation covers RAC architecture, components, startup sequence, single instance configuration, node eviction, and tips for monitoring and improving the RAC environment.
Blocks are a cool concept and very much needed for performance improvements and responsiveness. GCD helps run blocks effortlessly by scheduling them on a desired queue, with a chosen priority, and lots more.
In this presentation, we introduce liblightnvm, a user space library that manages provisioning and I/O submission for physical flash.
We argue how liblightnvm can benefit I/O-intensive applications by providing predictable latency and reducing device write amplification, thus prolonging the device's endurance. We show how to integrate liblightnvm with RocksDB.
Clonezilla is an open-source disk and partition imaging/cloning application similar to commercial tools like Ghost and Acronis True Image. It can be used to clone hard drives, restore disk images, and deploy images across multiple systems. The presentation discusses Clonezilla features, how it works, related projects like DRBL-Winroll and Cloudboot, and use cases like mass deployment and bare metal recovery. It also provides information on the Clonezilla team and community.
Oracle Clusterware and Private Network Considerations - Practical Performance... (Guenadi Jilevski)
This document discusses Oracle Real Application Clusters (RAC) performance management. It covers RAC fundamentals and infrastructure, analyzing the impact of cache fusion, private interconnect considerations, common problems and symptoms, and diagnostics. The presentation addresses topics like global buffer cache, wait events, session and system statistics, IPC configuration, and network packet processing. It provides advice on tuning applications, the buffer cache, interconnect setup, and avoiding unnecessary parsing or locking to improve RAC performance.
This document summarizes several dynamic cache replication mechanisms: Victim Replication replicates cache lines evicted from the local cache to reduce access latency. Adaptive Selective Replication dynamically adjusts replication based on estimated costs and benefits. Adaptive Probability Replication replicates blocks based on predicted reuse probabilities. Dynamic Reusability-based Replication replicates blocks with high reuse. Locality-Aware Data Replication only replicates high-locality blocks to reduce misses while maintaining low replication overhead. The document provides details on these schemes and compares their approaches to dynamic cache block replication.
Linux Symposium 2009 slides, Suzaki: "Effect of readahead and file system block ..." (Kuniyasu Suzaki)
Ext2/3optimizer and readahead optimization can improve the performance of LBCAS (LoopBack Content Addressable Storage). Ext2/3optimizer rearranges file system blocks based on access profiles to reduce the number of small readahead requests. This increases the readahead coverage size, reduces redundant block downloads, and improves disk access locality. Performance analysis shows these optimizations reduce I/O utilization and consumption time in LBCAS during system booting.
RocksDB is an embedded key-value store that is optimized for fast storage. It uses a log-structured merge-tree to organize data on storage. Optimizing RocksDB for open-channel SSDs would allow controlling data placement to exploit flash parallelism and minimize overhead. This could be done by mapping RocksDB files like SSTables and logs to virtual blocks that map to physical flash blocks in a way that considers data access patterns and flash characteristics. This would improve performance by reducing writes and garbage collection.
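The log-structured merge-tree organization described above can be sketched as an in-memory memtable plus immutable sorted runs. This toy is an illustration of the LSM idea only, not RocksDB's actual code:

```python
# A toy LSM tree in the spirit of RocksDB: writes land in a mutable
# memtable; when it fills, it is frozen into an immutable sorted run
# (an "SSTable"). Reads check the memtable first, then runs from
# newest to oldest, so newer values shadow older ones.

class TinyLSM:
    def __init__(self, memtable_limit=2):
        self.memtable = {}
        self.sstables = []            # newest first; each run is immutable
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            # Flush: freeze the memtable as a sorted, immutable run.
            self.sstables.insert(0, dict(sorted(self.memtable.items())))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in self.sstables:     # newest run wins
            if key in run:
                return run[key]
        return None

db = TinyLSM()
db.put("a", 1); db.put("b", 2)        # second put fills and flushes the memtable
db.put("a", 3)                        # newer value shadows the flushed one
print(db.get("a"), db.get("b"))       # → 3 2
```

Because flushed runs are immutable, they map naturally onto append-friendly media, which is why the paper argues the layout suits open-channel SSDs: placement and garbage collection can follow the SSTable lifecycle.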
This document discusses SQL Azure and Windows Azure Storage. SQL Azure allows storing databases in the cloud with high availability and load balancing. Windows Azure Storage provides durable cloud storage for blobs, disks, tables and queues. It replicates data across multiple datacenters for high availability and scales massively to store large amounts of unstructured and structured data.
Similar to Oracle rac cachefusion - High Availability Day 2015 (20)
This document discusses running Oracle E-Business Suite on Oracle Cloud. It provides an overview of Oracle Cloud offerings including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It outlines reasons for moving E-Business Suite to Oracle Cloud like enabling business agility, lowering costs and risks, and supporting growth. The document also covers solution details such as deployment choices, roadmap for automation, and use cases for transitioning to Oracle Cloud.
Oracle provides a modern cloud infrastructure with bare metal servers, virtual machines, high performance storage, and networking services. Key aspects include availability domains for high availability, non-oversubscribed networking for predictable performance, and direct-attached NVMe storage for high IO workloads. Oracle's infrastructure is designed to provide enterprise-level features like governance, security and reliability while also offering flexibility, pay-as-you-go pricing, and integration with Oracle applications.
Implementing cloud applications: redefine your dimension (aioughydchapter)
This document discusses implementing cloud applications and Apps Associates' approach. It covers the benefits of cloud applications like short implementation cycles and reduced costs. It also discusses what businesses expect from cloud solutions like ease of use, availability, and personalized experiences. The document outlines Apps Associates' Happy Customer Approach principles and how they focus on the customer, industry experience, and best practices. It positions Apps Associates as experienced Oracle cloud experts who can help with implementation, integration, and managed services.
The document is a presentation slide deck on Oracle Analytics Cloud. It provides an overview and demo of the product. The presentation agenda includes an overview of platform as a service (PaaS), an introduction to Oracle Analytics Cloud, its features and capabilities, and a demo. Key capabilities discussed include connecting to various data sources, preparing and analyzing data, visualizing insights, predictive modeling, collaborative sharing and embedding analytics applications. The presentation emphasizes that Oracle Analytics Cloud provides a unified platform for managed data discovery.
Oracle Cloud Day (IaaS, PaaS, SaaS) - AIOUG Hyd Chapter (aioughydchapter)
The document provides information about the Oracle User Group AIOUG, including its mission, vision, board of directors, and growth in 2016. It summarizes AIOUG's activities in 2016, including 40 total events held across various chapters in India. It also provides details about an upcoming Oracle Cloud Day event in Hyderabad in February 2017.
DG Broker & client connectivity - High Availability Day 2015 (aioughydchapter)
The document discusses Oracle Data Guard Broker and managing client connectivity in a Data Guard configuration. Some key points:
- The Data Guard Broker automates configuration and monitoring of Data Guard, allowing management of an entire configuration from a single interface.
- It supports primary and standby databases. Services like redo transport and log apply are managed.
- Database services direct client connections to the correct database instance. A trigger ensures clients connect to the primary or standby as appropriate.
- Role-based services started by the trigger allow applications to fail over automatically to a new primary without code changes, using Fast Application Notification.
AIOUG HA Day Oct 2015: GoldenGate - High Availability Day 2015 (aioughydchapter)
This document provides an overview of Oracle GoldenGate and discusses its key components and topologies. It begins with background information about the presenter and then covers topics such as Oracle GoldenGate's supported platforms, common topologies used with Oracle GoldenGate including unidirectional data integration and high availability, and the benefits it provides such as zero downtime upgrades and live reporting. It also discusses Oracle GoldenGate's components including the extract, replicat, trail files, and pump. Finally, it touches on performance tuning techniques for Oracle GoldenGate including adjusting TCP buffer sizes and using checkpoints.
This document discusses demilitarized zone (DMZ) configurations for Oracle E-Business Suite Release 12. It describes four different DMZ architecture types including pros and cons. It also outlines the key steps to enable a DMZ, such as patching, cloning an external node, updating hierarchy type and node trust levels, configuring load balancers, and removing references to internal nodes. Additionally, it highlights some differences between DMZ configurations in 12.1.x and 12.2.x and provides best practices for DMZ implementation and security.
This document provides an overview of EBR (Edition-Based Redefinition) usage in EBS 12.2 for online patching. It introduces the key concepts of ADOP (Oracle's online patching utility), editions, editioning views, and cross-edition triggers which enable applying patches while the system remains available. The document then describes the different phases of the ADOP cycle (Prepare, Apply, Cutover, Cleanup, Abort) and how EBR works together with ADOP to allow patching with zero downtime. Customization considerations for moving code and files during the patching process are also covered.
Getting optimal performance from Oracle E-Business Suite (aioughydchapter)
This document discusses various ways to optimize the different tiers of Oracle E-Business Suite applications for better performance. It recommends staying current on application patches and upgrades, using optimal logging settings, optimizing workflow and forms processes, and tuning the JVM processes to reduce load on the database server and minimize network traffic. Specific techniques covered include purging workflow runtime data, disabling workflow queue retention, defining node affinity, reducing forms transactions, and adjusting JVM heap sizes.
The document discusses best practices for minimizing downtime during an Oracle E-Business Suite Release 12 upgrade. Key recommendations include:
1. Plan platform and database upgrades as separate downtimes before the main EBS upgrade downtime.
2. Prepare by identifying all required patches, tasks, and customizations work. Purge old data and optimize database parameters.
3. Test the full upgrade plan in a pre-production environment to validate assumptions and identify issues prior to production.
The document discusses Oracle's new online patching capabilities for E-Business Suite Release 12.2. With online patching, the E-Business Suite system remains available to users during patching operations, with downtime limited to a brief cutover period. Patches are applied to a copy of the production environment, including separate file system and database editions. This new approach aims to eliminate lengthy outages and allow patches to be applied with minimal disruption to business operations.
This document discusses Oracle query optimizer concepts like selectivity, cardinality, and object statistics. It provides examples of how the optimizer estimates cardinality based on statistics values like number of rows, distinct values, density and nulls. It also shows how index statistics like clustering factor, leaf blocks impact the choice between an index scan or full table scan.
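The cardinality arithmetic described above can be sketched directly. This assumes the standard no-histogram equality estimate (selectivity = 1/NDV, scaled by the non-null row count), which approximates but does not reproduce the optimizer's full logic:

```python
# Back-of-the-envelope optimizer arithmetic for an equality predicate
# on a column with no histogram: selectivity is 1 / num_distinct, and
# estimated cardinality scales the non-null row count by it.

def equality_cardinality(num_rows, num_nulls, num_distinct):
    selectivity = 1 / num_distinct
    return round((num_rows - num_nulls) * selectivity)

# 100,000 rows, 10,000 of them NULL, 30 distinct values in the column:
print(equality_cardinality(100_000, 10_000, 30))  # → 3000
```

Skewed data breaks the 1/NDV assumption, which is exactly where histograms come in: they replace the uniform selectivity with per-value frequencies, changing the cardinality estimate and potentially the index-versus-full-scan choice.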
Database and application performance, Vivek Sharma (aioughydchapter)
The document provides an overview of database and application design concepts. It discusses the importance of understanding the underlying database, development tools, and application data. Specific concepts covered include the system global area, locking and concurrency, optimizer statistics and transformations, database objects like tables and indexes, and Oracle waits. Examples are provided around query plans, bind peeking, multi-block reads, and optimizer evolution. Testing, inefficient queries, statistics, caching effects, and functions in predicates are identified as potential causes of performance issues.
This document provides an overview of performance tuning and indexing. It discusses indexing concepts like clustering factor and index data structures like B-trees. It also covers indexing strategies like reverse key indexes and the different types of histograms that can be created, including frequency, height-balanced, top frequency and hybrid histograms in 12c. The document concludes with discussing the basic statistics that are automatically collected on tables, columns and indexes to help with query optimization.
This document provides an overview of Oracle Automatic Workload Repository (AWR) and Active Session History (ASH) analytics. It discusses the AWR infrastructure, how AWR collects and stores database performance snapshots, and how Automatic Database Diagnostic Monitor (ADDM) analyzes the snapshots. It also describes how ASH collects real-time database activity samples and enables enhanced monitoring and troubleshooting capabilities in Oracle 12c. The presentation includes examples of AWR and ASH reports and demonstrations of new features in Oracle 12c such as Real-Time ADDM and enhanced ASH Analytics.
The document provides an overview of performance tuning for Oracle databases. It discusses tuning goals such as accessing the least number of blocks and caching blocks in memory. It outlines the tuning process which includes tuning the design, application, memory, I/O, contention and operating system. Common performance issues for OLTP systems like I/O bottlenecks are also covered. Various tools for identifying performance problems are listed.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin... (Tatiana Kojar)
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
2. History of RAC
1977 – ARCnet developed by Datapoint
1980 – Digital Equipment Corporation (DEC) released the VAXcluster product for VAX/VMS (first commercial launch)
1988 – The first database to support clustering was launched with Oracle Version 6.0 for the Digital VAX operating system on the nCUBE machine. Oracle's lock manager was not scalable.
1989 – Oracle 6.2 gave birth to Oracle Parallel Server (OPS) with Oracle's DLM (Distributed Lock Manager), which worked well with Digital VAX clusters.
1990 – Oracle 7.0 started using vendor clusterware; almost all UNIX vendors had started offering clustering technology.
1997 – Oracle 8 was released with a generic lock manager integrated with the Oracle code, plus an additional layer called Operating System Dependent (OSD). In later versions the lock manager was integrated with the kernel and renamed the Integrated Distributed Lock Manager (IDLM).
Oracle Real Application Clusters, from Oracle 9i onward, uses the same IDLM, and the story continues…
3. RAC - Cache Fusion
Diagram: two server nodes (Server Node1 and Server Node2), each with its own RAM (buffer cache), connected by the interconnect and sharing a disk array.
1. User1 queries data.
2. User2 queries the same data - served via the interconnect with no disk I/O.
3. User1 updates a row of data and commits.
4. User2 wants to update the same block of data - the database keeps data concurrency via the interconnect.
4. The Necessity of Global Resources
Diagram: four stages showing block 1008 cached in SGA1 and SGA2. Without global coordination, both instances read block 1008 (steps 1-2), then each updates its own copy to 1009 (steps 3-4), producing lost updates!
5. Global Resources Coordination
Diagram: Node1/Instance1 through Noden/Instancen, connected by the cluster interconnect. Each instance runs the background processes LMON, LMD0, LMSx, LCK0, and DIAG, and holds its share of the global resources: a cache, a GRD master, GES, and GCS.
Global Enqueue Services (GES), Global Cache Services (GCS), and the Global Resource Directory (GRD) coordinate the global resources.
6. Global Cache Coordination: Example
Diagram: Node1/Instance1 and Node2/Instance2 in a cluster; the block on disk is at version 1008, and Instance2 caches version 1009.
1. Which instance masters the block?
2. The block is mastered by instance one.
3. Instance two has the current version of the block.
4. GCS ships the block (1009) to the requestor via the interconnect - no disk I/O.
7. Write to Disk Coordination: Example
Diagram: Node1/Instance1 and Node2/Instance2 in a cluster; Instance2 holds the current version (1010) of a block, Instance1 an older copy (1009).
1. "Need to make room in my cache. Who has the current version of that block?"
2. "Instance two owns it."
3. "Instance two, flush the block to disk."
4.-5. "Block flushed, make room."
Only one disk I/O is needed.
9. Cache Fusion Architecture
• Full Cache Fusion
• Cache-to-cache data shipping
• Shared cache eliminates slow I/O
• Enhanced IPC
• Allows flexible and transparent deployment
10. Cache Fusion: Inter Instance Block Requests
• Readers and writers accessing instance A gain access to blocks in instance B's buffer cache
• All types of block contention and access
• Coordination by Global Cache/Enqueue Services
Diagram: a read request for a block against Cache A is served from the block in Cache B; read and write access in either direction is mediated by lock status.
11. Cache Fusion Details: GES & GCS
Global Enqueue Service (GES)
• Coordinates requests for all global enqueues (any non-buffer-cache resources)
• Deadlock detection and timeout of requests
• Manages resource caching/cleanup
Global Cache Service (GCS)
• Guarantees cache coherency
• Manages caching of shared data via Cache Fusion
• Minimizes access time to data that is not in the local cache and would otherwise be read from disk or rolled back
• Implements fast direct memory access over high-speed interconnects for all data blocks and types
• Uses an efficient and scalable messaging protocol
• Maintains block mode for blocks with the global role
• Responsible for block transfers between instances
12. Cache Fusion: Global Resource Directory
• The data structures associated with global resources
• Global Cache Services and Global Enqueue Services maintain the Resource Directory
• Distributed across all instances in a cluster
• Responsible for:
  - Maintaining the mode and role of cached database blocks
  - Maintaining block copies for recovery purposes (past images)
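To make the mastership idea concrete, here is a minimal sketch of how a block's master instance can be derived by hashing its resource identifier across the cluster. This is purely illustrative: the function name and hashing scheme are mine, not Oracle's internal algorithm.

```python
# Hypothetical sketch of GRD mastership (NOT Oracle's actual algorithm):
# each block resource has exactly one master instance, and mastership is
# spread evenly across the cluster so every node can compute it locally.

def master_instance(file_no: int, block_no: int, num_instances: int) -> int:
    """Map a (file#, block#) resource to the instance that masters it."""
    resource_id = (file_no << 32) | block_no   # combine into one key
    return resource_id % num_instances         # deterministic, even spread

# Every instance computes the same answer, so any node can locate the
# master for a block without asking anyone else.
masters = [master_instance(4, b, 3) for b in range(6)]
print(masters)
```

Because the mapping is deterministic, a requesting instance never has to broadcast to find the master; it computes the master and sends one directed message.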
13. Cache Fusion Details: Instance Processes
Role of LMON:
• Checks for instance transitions
• Reconfiguration
• Cleanup of cached enqueue resources
Role of LMD:
• Receives and processes GES messages
• Deadlock detection and request timeout
Role of LMSn (0-9; higher counts in 11g and 12c):
• Receives and processes GCS messages
• Buffer cache operations and transfers
14. Cache Fusion Details: Resource Modes
Three resource modes for global cache resources (cached database blocks):
• S - shared - used for blocks read into cache; any number of instances can hold a block in S mode
• X - exclusive - used for blocks updated in cache; only one instance can hold a block in X mode
• N - null - used for blocks not currently in cache
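The compatibility rules implied by these three modes can be sketched as a small truth table. This is an illustrative model of the rules just described, not Oracle internals:

```python
# Illustrative sketch of global cache resource mode compatibility:
#   N (null)      - block not currently held; compatible with anything
#   S (shared)    - readers; any number of instances may hold S
#   X (exclusive) - single writer; incompatible with S and X elsewhere

def compatible(held: str, requested: str) -> bool:
    """Can `requested` be granted while another instance holds `held`?"""
    if "N" in (held, requested):
        return True                              # null blocks nothing
    return held == "S" and requested == "S"      # readers coexist; X excludes all

assert compatible("S", "S")       # any number of readers
assert not compatible("X", "S")   # a writer excludes readers elsewhere
assert not compatible("S", "X")   # and vice versa
assert compatible("N", "X")       # null never conflicts
```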
15. Cache Fusion Details: Resource Roles
Two resource roles for global cache resources:
• L - local - the block can be manipulated by the instance without further global requests
  - The block can be held in X, S, or null mode
  - The block can be served to other instances
• G - global - block manipulation needs further inter-instance coordination
  - Blocks can be dirty on many nodes
  - Instances can use a global status for consistent read when the block is held in X mode by another instance
16. Cache Fusion Details: Past Images
• Only applicable to blocks with the global resource role
• A copy of a dirty block, made when the block is transferred to another instance
• Used for recovery purposes if necessary
• Maintained until it, or a later version of the block, is written to disk
17. Cache Fusion Details: Past Images
The past image concept was introduced in the RAC version of Oracle 9i to maintain data integrity. In an Oracle database, a typical data block is not written to disk immediately, even after it is dirtied. When the same dirty data block is requested by another instance for write or read purposes, an image of the block is created at the owning instance, and only then is the block shipped to the requesting instance. This backup image of the block is called the past image (PI) and is kept in memory. In the event of failure, Oracle can reconstruct the current version of the block by reading PIs. It is also possible to have more than one past image in memory, depending on how many times the data block was requested in the dirty stage.
18. Buffer States and Locks
• Buffers can be acquired in two states:
  - Current - when the intention is to modify
    • Shared current - the most recent copy; one copy per instance; same as on disk
    • Exclusive current - only one copy in the entire cluster; no shared current copies present
  - CR (consistent read) - when the intention is only to select
• Locks enforce the states:
  - XCUR for exclusive current
  - SCUR for shared current
  - No locking for CR
Wait Events in RAC
19.
Mode/Role      Local   Global
Null: N        NL      NG
Shared: S      SL      SG
Exclusive: X   XL      XG

Local
SL - When an instance has a resource in SL form, it can serve a copy of the block to other instances.
XL - When an instance has a resource in XL form, it has sole ownership: an exclusive lock to modify the block. All changes to the block are in its local buffer cache. If another instance wants the block, it contacts the holding instance via GCS.
NL - The NL form is used to protect a consistent read block. If a block is held in SL mode and another instance wants it in X mode, the current instance sends the block to the requesting instance and downgrades its lock to NL.
20.
Mode/Role      Local   Global
Null: N        NL      NG
Shared: S      SL      SG
Exclusive: X   XL      XG

Global
SG - In SG form, the block is present in one or more instances. An instance can read the block from disk and serve it to other instances.
XG - In XG form, a block can have one or more PIs, indicating multiple copies of the block in several instances' buffer caches. The instance with the XG role has the latest copy of the block and is the most likely candidate to write the block to disk. GCS can ask the instance with the XG role to write the block to disk or to serve it to another instance.
NG - After discarding the PIs when instructed by GCS, the block is kept in the buffer cache with the NG role. It then serves only as a CR copy of the block.
21.
LOCK MODE   DESCRIPTION
NL0         Null, local, no past images
SL0         Shared, local, no past image
XL0         Exclusive, local, no past image
NG0         Null, global - instance owns the current block image
SG0         Shared, global - instance owns the current image
XG0         Exclusive, global - instance owns the current image
NG1         Null, global - instance owns a past image of the block
SG1         Shared, global - instance owns a past image
XG1         Exclusive, global - instance owns a past image

Three characters distinguish lock or block access modes: the first letter is the lock mode, the second character is the lock role, and the third character (a number) indicates any past images for the lock in the local instance.
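The three-character naming scheme above lends itself to a tiny decoder. A sketch (function and field names are mine, purely for illustration):

```python
# Hypothetical decoder for the three-character lock names in the table
# above: first char = mode, second char = role, remaining digits = number
# of past images held by the local instance. Field names are illustrative.

MODES = {"N": "null", "S": "shared", "X": "exclusive"}
ROLES = {"L": "local", "G": "global"}

def decode_lock(name: str) -> dict:
    mode, role, pi = name[0], name[1], int(name[2:])
    return {"mode": MODES[mode], "role": ROLES[role], "past_images": pi}

print(decode_lock("XG1"))
# exclusive mode, global role, one past image held by this instance
```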
22. Cluster Coordination
Diagram: Node 1 and Node 2, each with a buffer cache, DBWR, and LMS, at SCN1 and SCN2 respectively, sharing one database. At a checkpoint, DBWR must get a lock on the database block before writing it to disk. This is called a Block Lock.
Courtesy: Arup Nanda
25. Checking for Buffers
How exactly is this "check" performed?
• By checking for a lock on the block
• The request comes to the grant queue of the block
• GCS checks that no other instance has any lock
• Instance 1 can read from the disk, i.e., Instance 1 is granted the lock
Diagram: a block's grant queue (SID1, SID2, SID3) and convert queue (SID5, SID6, SID7).
Courtesy: Arup Nanda
26. Master Instance
• Only one instance holds the grant and convert queues of a specific block
• This instance is called the master instance of that block
• The master instance varies from block to block
• The memory structure that records the master instance of a buffer is called the Global Resource Directory (GRD)
• The GRD is distributed across all instances
• The requesting instance must check the GRD to find the master instance, then request the lock from the master instance
Diagram: a block's grant queue (SID1, SID2, SID3) and convert queue (SID5, SID6, SID7).
Courtesy: Arup Nanda
27. Scenario 1
• A session connected to Instance 1 wants to select a block from a table
• Activities by Instance 1:
  1. Check its own buffer cache to see if the block exists
     - If it is found, can it just use it?
     - If it is not found, can it select from the disk?
  2. If not, then check the other instances
• How will it know which copy of the block is the best source?
Courtesy: Arup Nanda
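The lookup order in this scenario can be sketched as a small decision function. This is illustrative pseudologic under my own naming, not Oracle code:

```python
# Sketch of the scenario-1 lookup order: local cache first, then consult
# the GRD master, which either points at a holding instance (cache fusion
# transfer) or grants a disk read. All names here are illustrative.

def find_block(block, local_cache, grd, remote_caches):
    """Return (source, contents) for a requested block."""
    if block in local_cache:
        return ("local cache", local_cache[block])   # but is it current/usable?
    master = grd[block]                              # who masters this block?
    holder = remote_caches.get(block)                # does any instance hold it?
    if holder is not None:
        return ("remote cache via interconnect", holder)
    return ("disk read granted by %s" % master, None)

grd = {"blk42": "instance2"}
print(find_block("blk42", {}, grd, {"blk42": "v1009"}))
```

The point of the sketch: the requestor never scans other caches itself; the master's queues decide where the best copy comes from.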
28. Cache Fusion
Diagram: Node 1 and Node 2, each with a buffer cache, SMON, and LMS. When node 2 wants a buffer, it sends a message to the other instance. The message goes to the LMS (Lock Management Server) process of that instance, and LMS then ships the buffer back. LMS is also called the Global Cache Service (GCS) process and maintains the global cache.
Courtesy: Arup Nanda
29. Grant Scenario 2
1. Instance 1 checks its buffer cache to see if the block exists
2. The buffer is found. Can Instance 1 use it? Not really: the buffer may be old; it may have been changed elsewhere
3. LMS of node 1 sends a message to the master of the buffer
4. The master checks the GES and doesn't see any lock
5. Instance 1 is granted the global block lock
6. No buffer actually gets transferred
30. Grant Scenario 3
• Instance 1 is the master
  - Then it doesn't have to make a request for the grant
• In summary, here are the possible scenarios when Instance 1 requests a buffer:
  - Instance 1 is the master, so no more processing is required
  - No one has a lock on the buffer; the grant is made by the master immediately
  - Another instance has the buffer in an incompatible mode; it has to be changed
48. Wait Event: gc current block 2-way
Instance 1 (requesting instance), Instance 2 (master instance); block on disk.
1. Instance 1 asks for the current block and a lock in exclusive mode. Wait event: gc current request.
2. The master instance sends the current block via the interconnect, keeps a past image, and grants the exclusive lock. Wait event recorded: gc current block 2-way.
49. Wait Event: gc current block 3-way
Instance 1 (requesting instance), Instance 2 (master instance), Instance 3 (holding instance); block on disk.
1. Instance 1 asks for the current block and a lock in exclusive mode. Wait event: gc current request.
2. The master instance forwards the request to the holder and tells the other instances holding shared locks to close their locks.
3. The holding instance sends the current block, transfers exclusive ownership to the requestor, and keeps a past image of the block. Wait event recorded: gc current block 3-way.
50. Wait Event: gc cr block 2-way
Instance 1 (requesting instance), Instance 2 (master instance); block on disk.
1. Instance 1 asks for the block and a lock in shared mode. Wait event: gc cr request.
2. The master instance has the current block, makes a CR copy, and sends it via the interconnect, with no lock granted. Wait event recorded: gc cr block 2-way.
51. Wait Event: gc cr block 3-way
Instance 1 (requesting instance), Instance 2 (master instance), Instance 3 (holding instance); block on disk.
1. Instance 1 asks for the block and a lock in shared mode. Wait event: gc cr request.
2. The master instance forwards the request to the holder; no lock is granted.
3. The holding instance makes a CR copy and forwards it to the requestor. Wait event recorded: gc cr block 3-way.
52. Under the Covers
Diagram: Instances 1 through n on Nodes 1 through n, connected by the cluster private high-speed network, sharing the data files and control files; each instance has its own redo log files. Each SGA contains a buffer cache, library cache, dictionary cache, log buffer, and Global Resource Directory, with the background processes LMON, LMD0, LMS0, LCK0, DIAG, LGWR, DBW0, SMON, and PMON.
53. Interconnect and IPC Processing
A request is a small message (~200 bytes); the reply ships a full block (e.g., 8K). The requesting LMS initiates the send and waits; the remote side receives the message, processes the block, and sends it; the requestor receives it.
• Message: 200 bytes / (1 Gb/sec)
• Block: 8192 bytes / (1 Gb/sec)
• Total access time: e.g., ~360 microseconds (UDP over GbE)
• Network propagation delay ("wire time") is a minor factor in roundtrip time (approx. 6%, vs. 52% in the OS and network stack)
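The serialization times behind these numbers follow directly from the line rate. A quick back-of-envelope check (idealized 1 Gb/s link, ignoring protocol headers and contention):

```python
# Back-of-envelope check of the slide's interconnect numbers on an
# idealized 1 Gb/s link (payload bits only; no headers, no contention).

GBIT = 1_000_000_000  # bits per second

def wire_time_us(nbytes: int, bps: int = GBIT) -> float:
    """Serialization time in microseconds for nbytes on a bps link."""
    return nbytes * 8 / bps * 1_000_000

msg_us = wire_time_us(200)    # the ~200-byte request message
blk_us = wire_time_us(8192)   # an 8K block

print(round(msg_us, 1), round(blk_us, 1))
# Both are small next to the ~360 us total access time measured with UDP
# over GbE - which is why the OS and network stack dominate the roundtrip.
```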
54. Block Access Cost
Cost determined by
• Message Propagation Delay
• IPC CPU
• Operating system scheduling
• Block server process load
• Interconnect stability
55. Block Access Latency
• Defined as roundtrip time
• Latency variation (and CPU cost) correlates with:
  - processing time in Oracle and the OS kernel
  - db_block_size
  - interconnect saturation
  - load on the node (CPU starvation)
• ~300 microseconds is the lowest measured with UDP over Gigabit Ethernet and 2K blocks
• ~120 microseconds is the lowest measured with RDS over InfiniBand and 2K blocks
56. Infrastructure: Private Interconnect
• The network between the nodes of a RAC cluster MUST be private
• Supported links: GbE, IB (IPoIB: 10.2)
• Supported transport protocols: UDP, RDS (10.2.0.3 and above)
• Use multiple or dual-ported NICs for redundancy, and increase bandwidth with NIC bonding
• Large (jumbo) frames are recommended for GbE
57. Infrastructure: Interconnect Bandwidth
• Bandwidth requirements depend on:
  - CPU power per cluster node
  - Application-driven data access frequency
  - Number of nodes and size of the working set
  - Data distribution between PQ slaves
• Typical utilization is approx. 10-30% in OLTP
  - 10000-12000 8K blocks per second saturate 1 x Gb Ethernet (75-80% of theoretical bandwidth)
• Multiple NICs are generally not required for performance and scalability
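The 10000-12000 blocks/sec saturation figure can be sanity-checked from the stated 75-80% usable share of the link (idealized: payload bits only, headers ignored):

```python
# Sanity check: how many 8K blocks per second fit into 75-80% of a
# 1 Gb/s link? (Payload only; protocol overhead ignored.)

GBIT = 1_000_000_000
BLOCK_BITS = 8192 * 8

lo = int(GBIT * 0.75 / BLOCK_BITS)
hi = int(GBIT * 0.80 / BLOCK_BITS)
print(lo, hi)  # lands in the ballpark of the 10000-12000 blocks/sec quoted
```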
59. Misconfigured or Faulty Interconnect Can Cause:
• Dropped packets/fragments
• Buffer overflows
• Packet reassembly failures or timeouts
• Ethernet flow control kicking in
• TX/RX errors
These surface as "lost blocks" at the RDBMS level, responsible for 64% of escalations.
61. “Lost Blocks”: IP Packet Reassembly Failures
netstat -s
Ip:
84884742 total packets received
…
1201 fragments dropped after timeout
…
3384 packet reassembles failed
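A small script can watch for exactly these counters. A sketch that parses the Linux-style `netstat -s` sample above (counter wording differs by platform, so the patterns are an assumption):

```python
# Sketch: pull fragment/reassembly counters out of `netstat -s` output.
# The label wording matched below follows the Linux-style sample shown
# above and may differ on other platforms.

import re

SAMPLE = """Ip:
    84884742 total packets received
    1201 fragments dropped after timeout
    3384 packet reassembles failed
"""

def reassembly_stats(text: str) -> dict:
    stats = {}
    for count, label in re.findall(r"(\d+) ([a-z ]+)", text):
        if "fragment" in label or "reassembl" in label:
            stats[label.strip()] = int(count)
    return stats

print(reassembly_stats(SAMPLE))
# nonzero values here point at the "lost blocks" problem described in the
# surrounding slides
```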
62. Finding a Problem with the Interconnect or IPC

Top 5 Timed Events
Event                 Waits      Time(s)   Avg wait (ms)  %Total Call Time  Wait Class
log file sync         286,038    49,872    174            41.7              Commit
gc buffer busy        177,315    29,021    164            24.3              Cluster
gc cr block busy      110,348    5,703     52             4.8               Cluster
gc cr block lost      4,272      4,953     1159           4.1               Cluster
cr request retry      6,316      4,668     739            3.9               Other

Should never be here: gc cr block lost and cr request retry.
63. CPU Saturation or Memory Depletion

Top 5 Timed Events
Event                        Waits       Time(s)  Avg wait (ms)  %Total Call Time  Wait Class
db file sequential read      1,312,840   21,590   16             21.8              User I/O
gc current block congested   275,004     21,054   77             21.3              Cluster
gc cr grant congested        177,044     13,495   76             13.6              Cluster
gc current block 2-way       1,192,113   9,931    8              10.0              Cluster
gc cr block congested        85,975      8,917    104            9.0               Cluster

"Congested": LMS could not dequeue messages fast enough.
Cause: long run queues and paging on the cluster nodes.
64. Health Check
Look for:
• High impact of "lost blocks", e.g., gc cr block lost at 1159 ms
• IO capacity saturation, e.g., gc cr block busy at 52 ms
• Overload and memory depletion, e.g., gc current block congested at 14 ms
All events with these tags are potential issues if their % of DB time is significant. Compare with the lowest measured latency (the target; cf. session history reports or the session histogram view).
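This triage rule can be written down directly. A sketch with made-up thresholds (the real target is your own lowest measured latency, not the constants below):

```python
# Sketch of the health-check triage: flag gc events whose tag signals
# trouble AND whose average latency is far above a baseline target AND
# whose share of DB time is significant. Thresholds are illustrative.

BAD_TAGS = ("lost", "busy", "congested")

def flag_events(events, target_ms=1.0, factor=10, min_pct=1.0):
    """events: [(name, avg_ms, pct_db_time)] -> names worth investigating."""
    return [
        name for name, avg_ms, pct in events
        if any(tag in name for tag in BAD_TAGS)
        and avg_ms > target_ms * factor
        and pct >= min_pct           # ignore events with negligible DB time
    ]

awr = [
    ("gc cr block lost", 1159, 4.1),
    ("gc cr block busy", 52, 4.8),
    ("gc current block congested", 14, 0.5),  # slow, but tiny share of DB time
    ("gc current block 2-way", 1, 10.0),      # normal fast transfer
]
print(flag_events(awr))
```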
66. General Principles
• No fundamentally different design and coding practices for RAC
• Badly tuned SQL and schemas will not run better
• Serializing contention makes applications less scalable
• Standard SQL and schema tuning solves > 80% of performance problems
67. Scalability Pitfalls
• Serializing contention on a small set of data/index blocks:
  - monotonically increasing keys
  - frequent updates of small cached tables
  - segments without ASSM or free list groups (FLG)
• Full table scans
• Frequent hard parsing
• Concurrent DDL (e.g., truncate/drop)
68. Index Block Contention: Optimal Design
• Monotonically increasing sequence numbers:
  - Randomize or cache
  - Use large Oracle sequence number caches
• Hash or range partitioning
  - Local indexes
69. Data Block Contention: Optimal Design
• Small tables with high row density and frequent updates and reads can become "globally hot" with serialization, e.g.:
  - queue tables
  - session/job status tables
  - last-trade lookup tables
• A higher PCTFREE for the table reduces the number of rows per block
70. Large Contiguous Scans
• Query tuning
• Use parallel execution:
  - Intra- or inter-instance parallelism
  - Direct reads
  - Minimal GCS messaging
71. Event Statistics to Drive Analysis
• Global cache ("gc") events and statistics indicate that Oracle searches the cache hierarchy to find data fast
  - As "normal" as an IO event (e.g., db file sequential read)
• GC events tagged as "busy" or "congested" that consume a significant amount of database time should be investigated
  - At first, assume a load or IO problem on one or several of the cluster nodes
72. Global Cache Event Semantics
All global cache events follow the format: gc [cr | current] [block | grant] [2-way | 3-way] [busy | congested]
• cr, current: buffer requested and received for read or write
• block, grant: received the block, or a grant to read it from disk
• 2-way, 3-way: immediate response to a remote request after N hops
• busy: the block or grant was held up because of contention
• congested: the block or grant was delayed because LMS was busy or could not get the CPU
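A naive parser for this naming convention makes the semantics explicit. Sketch only; it covers just the tokens listed on this slide:

```python
# Sketch: split a global cache wait event name into the semantic parts
# described above. Naive parsing; only the tokens on this slide are handled.

def parse_gc_event(name: str) -> dict:
    parts = name.split()
    assert parts[0] == "gc", "not a global cache event"
    return {
        "class":     "cr" if "cr" in parts else "current",
        "kind":      "grant" if "grant" in parts else "block",
        "hops":      next((p for p in parts if p in ("2-way", "3-way")), None),
        "busy":      "busy" in parts,
        "congested": "congested" in parts,
    }

print(parse_gc_event("gc cr block busy"))
print(parse_gc_event("gc current block 3-way"))
```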
73. “Normal” Global Cache Access Statistics

Top 5 Timed Events
Event                    Waits     Time(s)  Avg wait (ms)  %Total Call Time  Wait Class
CPU time                           4,580                   65.4
log file sync            276,281   1,501    5              21.4              Commit
log file parallel write  298,045   923      3              13.2              System I/O
gc current block 3-way   605,628   631      1              9.0               Cluster
gc cr block 3-way        514,218   533      1              7.6               Cluster

Reads from the remote cache instead of disk; avg latency is 1 ms or less.
74. “Abnormal” Global Cache Statistics

Top 5 Timed Events
Event             Waits     Time(s)  Avg wait (ms)  %Total Call Time  Wait Class
log file sync     286,038   49,872   174            41.7              Commit
gc buffer busy    177,315   29,021   164            24.3              Cluster
gc cr block busy  110,348   5,703    52             4.8               Cluster

"busy" indicates contention; the average time is too high.
75. Drill-down: An IO Capacity Problem
Symptom of full table scans and IO contention:

Top 5 Timed Events
Event                      Waits        Time(s)  Avg wait (ms)  %Total Call Time  Wait Class
db file scattered read     3,747,683    368,301  98             33.3              User I/O
gc buffer busy             3,376,228    233,632  69             21.1              Cluster
db file parallel read      1,552,284    225,218  145            20.4              User I/O
gc cr multi block request  35,588,800   101,888  3              9.2               Cluster
read by other session      1,263,599    82,915   66             7.5               User I/O
76. Drill-down: SQL Statements
"Culprit": a query that overwhelms the IO subsystem on one node.

Physical Reads  Executions  per Exec   %Total
182,977,469     1,055       173,438.4  99.3
SELECT SHELL FROM ES_SHELL WHERE MSG_ID = :msg_id ORDER BY ORDER_NO ASC

The same query reads from the interconnect:

Cluster Wait Time (s)  CWT % of Elapsed Time  CPU Time(s)  Executions
341,080.54             31.2                   17,495.38    1,055
SELECT SHELL FROM ES_SHELL WHERE MSG_ID = :msg_id ORDER BY ORDER_NO ASC
77. Drill-Down: Top Segments

Tablespace  Object    Subobject  Obj.   GC Buffer  % of
Name        Name      Name       Type   Busy       Capture
ESSMLTBL    ES_SHELL  SYS_P537   TABLE  311,966    9.91
ESSMLTBL    ES_SHELL  SYS_P538   TABLE  277,035    8.80
ESSMLTBL    ES_SHELL  SYS_P527   TABLE  239,294    7.60
…

Apart from being the table with the highest IO demand, it was also the table with the highest number of block transfers AND global serialization.
79. Diagnostics Flow
• Start with simple validations:
  - Is the private interconnect actually used?
  - Lost blocks and failures?
  - Load and load-distribution issues?
• Check average latencies, busy and congested events, and their significance
• Check OS statistics (CPU, disk, virtual memory)
• Identify SQL and segments
MOST OF THE TIME, A PERFORMANCE PROBLEM IS NOT A RAC PROBLEM
80. Actions
• Interconnect issues must be fixed first
• If IO wait time is dominant, fix the IO issues
  - At this point, performance may already be good
• Fix "bad" plans
• Fix serialization
• Fix the schema