The talk begins with the underlying reasons that led to the emergence of a DBMS component such as the result cache, and why some DBMSs have it while others do not.
It then examines various approaches to caching the results of both SQL queries and database-resident business logic, compares caching techniques (hand-written caches versus built-in functionality), and gives recommendations on when these techniques are optimal and when they can be dangerous.
Each recommendation is illustrated with both positive and negative cases from production experience with real systems that use different kinds of caches.
This document discusses JSON storage and querying in Oracle databases. It covers:
- Configuring the database to support JSON with required patches
- Storing JSON in BLOB columns for space savings and avoiding character set conversions
- Indexing approaches for JSON including indexes on specific fields
- Performance of ingesting, retrieving, and searching JSON data, which can be improved through techniques like anchoring queries and using in-memory indexes
- Maintenance best practices like checking JSON constraints and rebuilding indexes periodically
This document discusses storing and working with JSON data in an Oracle database. It covers JSON configuration, storage, ingestion, retrieval, searching, and maintenance. JSON support in Oracle has improved with patches but still has limitations. Proper configuration, indexing strategy, and maintenance are important for performance. While JSON features are available, Oracle may not be ideal for large document stores and JSON support remains a work in progress.
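The "JSON in BLOB" storage approach above can be sketched in a few lines. This is an illustrative, database-agnostic example (Python with sqlite3 standing in for Oracle): the client validates the JSON by serializing it, then stores the raw bytes in a BLOB column, which is the idea behind storing JSON as BLOB to avoid character-set conversion.

```python
import json
import sqlite3

# Illustrative sketch (not Oracle-specific): validate JSON client-side,
# then store the serialized bytes in a BLOB column, mirroring the
# "JSON in BLOB" storage approach described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body BLOB)")

def store_json(doc_id, obj):
    # Serialization doubles as validation: non-JSON-safe objects raise
    # here, playing the role of an IS JSON check constraint.
    payload = json.dumps(obj).encode("utf-8")
    conn.execute("INSERT INTO docs (id, body) VALUES (?, ?)", (doc_id, payload))

def load_json(doc_id):
    row = conn.execute("SELECT body FROM docs WHERE id = ?", (doc_id,)).fetchone()
    return json.loads(row[0].decode("utf-8"))

store_json(1, {"sku": "A-100", "qty": 3})
print(load_json(1)["qty"])  # 3
```

The table name, column names, and helper functions are invented for the sketch; in Oracle the validation would be done by an `IS JSON` check constraint rather than client-side.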
Oracle High Availability for Application Developers, by Alexander Tokarev
Oracle High Availability in application development discusses various Oracle technologies for achieving high availability (HA) such as:
1. RAC, DataGuard/Active Data Guard, and GoldenGate or similar replication tools provide redundancy and failover capabilities at the database level.
2. Technologies like Transaction Guard (TG), Application Continuity (AC), Fast Connection Failover (FCF), and Transparent Application Failover (TAF) help applications handle failures with minimal impact through connection management and transaction replay.
3. These technologies require configuration of services, callbacks, and some code changes but can provide seamless failover for applications connecting to Oracle databases in an HA environment.
The document discusses Oracle Database result caching. It provides an overview of database caches including the result cache. It then describes a hand-made result cache implementation for a retailer case study and how it improved performance from 20 minutes to 4 minutes for a report. It also discusses using the Oracle Database result cache explicitly with hints and annotations, how to monitor and manage it using views and packages, limitations, and best practices.
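The hand-made result cache idea from the retailer case can be sketched as follows. This is a minimal illustrative sketch, not the implementation from the talk: results are keyed by (query text, parameters) and invalidated when a dependent table changes, which is also roughly how the built-in result cache behaves on DML. All class and variable names are invented for the example.

```python
# Minimal sketch of a hand-made result cache, assuming results are keyed by
# (query text, parameters) and invalidated when dependent data changes --
# the same idea as the hand-made cache in the retailer case above.
class ResultCache:
    def __init__(self):
        self._store = {}   # (query, params) -> cached rows
        self._deps = {}    # table name -> set of cache keys depending on it

    def get_or_compute(self, query, params, tables, compute):
        key = (query, params)
        if key not in self._store:
            self._store[key] = compute()   # the expensive call runs only once
            for t in tables:
                self._deps.setdefault(t, set()).add(key)
        return self._store[key]

    def invalidate(self, table):
        # Drop every cached result that depends on the modified table,
        # analogous to the result cache invalidating entries on DML.
        for key in self._deps.pop(table, set()):
            self._store.pop(key, None)

cache = ResultCache()
calls = []
slow_report = lambda: calls.append(1) or [("total", 42)]
cache.get_or_compute("SELECT ...", ("2018",), ["sales"], slow_report)
cache.get_or_compute("SELECT ...", ("2018",), ["sales"], slow_report)
print(len(calls))  # 1: the second call was served from the cache
```

The 20-minute-to-4-minute improvement in the case study comes from exactly this effect: the expensive computation runs once per unique parameter set instead of once per report run.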
Oracle JSON treatment evolution - from 12.1 to 18, AOUG 2018, by Alexander Tokarev
The presentation was prepared for the Austrian Oracle User Group's 30th anniversary. It describes many of the challenges Oracle developers face when implementing high-load JSON processing pipelines.
The presentation covers many technical details of row-level security and possible security breaches in relational databases such as Oracle and PostgreSQL, with numerous examples of how to protect data.
P9 - Speed-of-light faceted search via Oracle In-Memory option, by Alexander Tokarev
Alexander Tokarev is a 38-year-old database performance architect who has worked with Oracle databases since 2001. He gave a presentation on his experiences using Oracle In-Memory to improve performance for a faceted search project. The initial implementation did not provide significant performance gains due to limitations of Oracle In-Memory. Various changes were made to the table structure and indexing that resulted in a 4x performance improvement without requiring additional software.
The presentation describes various options for implementing row-level security in enterprise applications: database-side, application-server-side, and mixed approaches. It covers Oracle Virtual Private Database, different encryption options, and possible security breaches together with their mitigation paths.
The presentation describes what Apache Solr is and how it can be used. It includes an Apache Solr overview, performance tuning tips, and a description of advanced features.
In Memory Database In Action, by Tanel Poder and Kerry Osborne (Enkitec)
The document discusses Oracle Database In-Memory option and how it improves performance of data retrieval and processing queries. It provides examples of running a simple aggregation query with and without various performance features like In-Memory, vector processing and bloom filters enabled. Enabling these features reduces query elapsed time from 17 seconds to just 3 seconds by minimizing disk I/O and leveraging CPU optimizations like SIMD vector processing.
Oracle Database 12c - The Best Oracle Database 12c Tuning Features for Develo..., by Alex Zaballa
Oracle Database 12c includes many new tuning features for developers and DBAs. Some key features include:
- Multitenant architecture allows multiple pluggable databases to consolidate workloads on a single database instance for improved utilization and administration.
- In-memory column store enables real-time analytics on frequently accessed data held entirely in memory for faster performance.
- New SQL syntax like FETCH FIRST for row limiting and offsetting provides more readable and intuitive replacements for previous techniques.
- Adaptive query optimization allows queries to utilize different execution plans like switching between nested loops and hash joins based on runtime statistics for improved performance.
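The row-limiting syntax mentioned above can be illustrated with a small runnable sketch. Oracle 12c's standard spelling is `SELECT ... ORDER BY col FETCH FIRST n ROWS ONLY`; the example below uses SQLite (so it runs anywhere) with the older `LIMIT`/`OFFSET` spelling, which expresses the same operation the 12c syntax replaces. Table and column names are invented for the example.

```python
import sqlite3

# Sketch of the row-limiting idea from the feature list above.
# Oracle 12c standard syntax:
#   SELECT ... ORDER BY salary DESC FETCH FIRST 3 ROWS ONLY
# SQLite (used here for a self-contained example) spells the same
# thing with LIMIT/OFFSET, the pre-12c style the feature supersedes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(i, 1000 * i) for i in range(1, 11)])

top3 = conn.execute(
    "SELECT id FROM emp ORDER BY salary DESC LIMIT 3"
).fetchall()
print(top3)  # [(10,), (9,), (8,)]
```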
OOW16 - Oracle Database 12c - The Best Oracle Database 12c New Features for D..., by Alex Zaballa
This document provides an overview of new features in Oracle Database 12c for developers and DBAs. It begins with an introduction by Alex Zaballa and then covers several new features including native support for JSON, data redaction, row limits and offsets for SQL queries, PL/SQL functions callable from SQL, session level sequences, and temporary undo. The document includes demonstrations of many of these new features.
This is a recording of my Advanced Oracle Troubleshooting seminar preparation session - where I showed how I set up my command line environment and some of the main performance scripts I use!
DBA Commands and Concepts That Every Developer Should Know, by Alex Zaballa
DBA Commands and Concepts That Every Developer Should Know was presented by Alex Zaballa, an Oracle DBA with experience in Brazil and Angola. The presentation covered Oracle Flashback Query, Flashback Table, RMAN table recovery, pending statistics, explain plan, DBMS_APPLICATION_INFO, row-by-row vs bulk processing, Virtual Private Database, extended data types, SQL text expansion, identity columns, UTL_CALL_STACK, READ privileges vs SELECT privileges, and online table redefinition. The presentation included demonstrations of many of these concepts.
Oracle Database 12.1.0.2 introduced several new features including approximate count distinct, full database caching, pluggable database (PDB) improvements like cloning and state management, JSON support, data redaction, SQL query row limits and offsets, invisible columns, SQL text expansion, calling PL/SQL from SQL, session level sequences, and extended data types support.
OTN TOUR 2016 - DBA Commands and Concepts That Every Developer Should Know, by Alex Zaballa
This document contains a summary of an Oracle DBA presentation on DBA commands and concepts that every developer should know. The presentation covered topics such as parallel queries, row chaining, explain plans, flashback queries, pending statistics, bulk processing, virtual private databases, extended data types, identity columns, and online table redefinition. It provided examples and demonstrations of many of these commands and concepts.
Flex Cluster e Flex ASM - GUOB Tech Day - OTN TOUR LA Brazil 2014, by Alex Zaballa
The document discusses Oracle Flex Cluster and Flex ASM configurations. A Flex Cluster allows running Oracle databases on hub and leaf nodes, where leaf nodes do not require direct access to storage. It also discusses converting existing clusters to Flex Clusters and Flex ASM. Key aspects covered include the use of Grid Naming Service for Flex Clusters, capabilities of hub and leaf nodes, and enhancements in Flex ASM such as larger LUN size support and password file storage in ASM.
The document discusses techniques for analyzing SQL performance on Oracle Exadata systems using tools like ASH, SQL monitoring, and ExaSnapper. It provides examples of using these tools to identify SQL statements that are not optimized for Exadata's Smart Scan feature and determining if problematic statements are long-running queries or frequent short queries. The document also demonstrates how to selectively force full table scans for reporting workloads while keeping indexes available for OLTP workloads.
This presentation, delivered at Oracle Technical Carnival China 2016, is the second talk about the Oracle Sharding feature released in 12.2. It describes, through a real case, how Oracle constructs sharded tables and duplicated tables.
DBA Brasil 1.0 - DBA Commands and Concepts That Every Developer Should Know, by Alex Zaballa
This document summarizes a presentation on DBA commands and concepts that every developer should know. The presentation covers topics such as parallel processing, explain plans, flashback queries, pending statistics, virtual columns, and online table redefinition. It demonstrates several commands and concepts to help developers better understand database administration tasks.
Longhorn PHP - MySQL Indexes, Histograms, Locking Options, and Other Ways to ..., by Dave Stokes
This document discusses various ways to speed up queries in MySQL, including the proper use of indexes, histograms, and locking options. It begins with an introduction to indexes, explaining that indexes are data structures that improve the speed of data retrieval by allowing for faster lookups and access to ordered records. The document then covers different types of indexes like clustered indexes, secondary indexes, functional indexes, and multi-value indexes. It emphasizes choosing indexes carefully based on the most common queries and selecting columns that are not often updated. Overall, the document provides an overview of optimization techniques in MySQL with a focus on index usage.
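The effect of a secondary index on a query plan can be shown with a small self-contained sketch. Since MySQL needs a server, the example uses SQLite's `EXPLAIN QUERY PLAN` as a stand-in for MySQL's `EXPLAIN`; the table and index names are invented, but the before/after pattern is the same in both engines.

```python
import sqlite3

# Sketch of "index wisely": the same lookup goes from a full table scan
# to an index search once a secondary index exists. SQLite's
# EXPLAIN QUERY PLAN stands in for MySQL's EXPLAIN so the example is
# self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")

def plan():
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'acme'"
    ).fetchall()
    return rows[0][-1]  # the human-readable plan detail

before = plan()  # typically a full scan of the orders table
conn.execute("CREATE INDEX idx_cust ON orders (customer)")
after = plan()   # typically a search using index idx_cust
print(before, "->", after)
```

The "selecting columns that are not often updated" advice in the summary matters because every index created here would also have to be maintained on each write to `customer`.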
Ukoug15 - SIMD outside and inside Oracle 12c (12.1.0.2), by Laurent Leturgez
This document discusses SIMD (Single Instruction Multiple Data) instructions both outside and inside Oracle 12c. It provides an overview of SIMD instructions on Intel architectures, how they can improve performance, and how Oracle 12c leverages SIMD registers and instructions for in-memory columnar storage and filtering. The document also discusses how to trace SIMD instruction usage inside Oracle using tools like gdb and systemtap.
Dutch PHP Conference 2021 - MySQL Indexes and Histograms, by Dave Stokes
This document discusses how to speed up queries in MySQL through the proper use of indexes, histograms, and other techniques. It begins by explaining that the MySQL optimizer tries to determine the most efficient way to execute queries by considering different query plans. The optimizer relies on statistics about column distributions to estimate query costs. The document then discusses using EXPLAIN to view and analyze query plans, and how indexes can improve query performance by allowing faster data retrieval through secondary indexes and other index types. Proper index selection and column data types are important to allow the optimizer to use indexes efficiently.
- Design and develop with performance in mind
- Establish a tuning environment
- Index wisely
- Reduce parsing
- Take advantage of the Cost Based Optimizer
- Avoid accidental table scans
- Optimize necessary table scans
- Optimize joins
- Use array processing
- Consider PL/SQL for “tricky” SQL
Introduction to MySQL Query Tuning for Dev[Op]s, by Sveta Smirnova
To get data, we query the database. MySQL does its best to return the requested bytes as fast as possible; however, it needs human help to identify what is important and should be accessed first.
Smartly written queries can significantly outperform automatically generated ones. Indexes and optimizer statistics, not limited to histograms, can greatly increase query speed.
In this session, I demonstrate by example how MySQL query performance can be improved. I focus on techniques accessible to developers and DevOps engineers rather than those usually used by database administrators. In the end, I present troubleshooting tools that help you identify why your queries do not perform well, so you can apply the knowledge from the beginning of the session to improve them.
Talk at "Istanbul Tech Talks" in Istanbul, April 17, 2018. http://www.istanbultechtalks.com/
In this talk I show how to get started with MySQL query tuning. I give a short introduction to physical table structure and demonstrate how it may influence query execution time. Then we discuss basic query tuning instruments and techniques, mainly the EXPLAIN command with its latest variations. You will learn how to understand its output and how to rewrite a query or change a table structure to achieve better performance.
This document discusses SQL skills and how queries can negatively impact server performance if not written efficiently. It covers topics like query plans, execution contexts, using parameters, indexing, handling large datasets, and external influences on SQL performance. Specific "bad" SQL examples are also provided and analyzed. The presenter aims to help developers optimize their SQL and prevent poorly written queries from bringing servers to their knees.
This document discusses various strategies for statement caching in databases to improve performance. It covers server-side caching where databases cache execution plans and client-side caching where applications cache prepared statements. It provides examples of statement caching configuration for Oracle, SQL Server, PostgreSQL and MySQL and shows that caching can significantly improve throughput, with one example seeing a 20% gain.
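The client-side half of statement caching can be sketched as follows. This is an illustrative pattern, not any particular driver's API: the application keeps prepared handles keyed by SQL text so repeated executions skip the parse step. (Python's `sqlite3` does this internally; see the `cached_statements` argument to `sqlite3.connect`. The class below just makes the pattern visible.)

```python
import sqlite3

# Sketch of client-side statement caching: prepared handles are kept in a
# dictionary keyed by SQL text, so repeated executions reuse the handle
# instead of re-preparing ("hard parsing") the statement every time.
class StatementCache:
    def __init__(self, conn):
        self._conn = conn
        self._cursors = {}   # SQL text -> reusable cursor/handle
        self.misses = 0      # how many times we had to prepare from scratch

    def execute(self, sql, params=()):
        cur = self._cursors.get(sql)
        if cur is None:      # cache miss: prepare once, cache the handle
            cur = self._conn.cursor()
            self._cursors[sql] = cur
            self.misses += 1
        return cur.execute(sql, params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
sc = StatementCache(conn)
for i in range(100):
    sc.execute("INSERT INTO t (x) VALUES (?)", (i,))
print(sc.misses)  # 1: the statement was prepared once and reused 100 times
```

The throughput gains cited in the summary come from this reuse: with bind parameters, one cached plan serves every execution instead of each call paying the parse cost.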
This document provides an overview and interpretation of the Automatic Workload Repository (AWR) report in Oracle database. Some key points:
- AWR collects snapshots of database metrics and performance data every 60 minutes by default and retains them for 7 days. This data is used by tools like ADDM for self-management and diagnosing issues.
- The top timed waits in the AWR report usually indicate where to focus tuning efforts. Common waits include I/O waits, buffer busy waits, and enqueue waits.
- Other useful AWR metrics include parse/execute ratios, wait event distributions, and top activities to identify bottlenecks like parsing overhead, locking issues, or inefficient SQL.
- Oracle Database 11g Release 2 provides many advanced features to lower IT costs including in-memory processing, automated storage management, database compression, and real application testing capabilities.
- It allows for online application upgrades using edition-based redefinition which allows new code and data changes to be installed without disrupting the existing system.
- Oracle provides multiple upgrade paths from prior database versions to 11g to allow for predictable performance and a safe upgrade process.
OrientDB v2.2 introduces several new features including live queries, parallel queries, command caching, sequences, incremental backups, improved security features, an easier distributed configuration, load balancing strategies, and SQL commands for managing high availability configurations. It also introduces the Teleporter tool for migrating data from relational databases like Oracle, SQL Server, MySQL, and PostgreSQL into OrientDB.
Presentation delivered by Matt Done, Head Of Platform Development at expanz Pty. Ltd. during DDD Sydney event on 2 July 2011.
Matt demonstrates what it takes to setup a highly sophisticated load test, using the Azure environment and how to use the results to optimise a fully blown application development platform and application server running on Azure.
Recording of this presentation can be found at www.youtube.com/expanzTV
Best Practices for Building Robust Data Platform with Apache Spark and Delta (Databricks)
This talk focuses on the journey of technical challenges, trade-offs, and ground-breaking achievements in building performant and scalable pipelines, drawn from experience working with our customers.
This document provides an overview of performance tuning the MySQL server. It discusses where to find server configuration and status information, how to analyze what the database is doing using status variables, and which configuration variables can be tuned for optimization, including global, per-session, and storage engine variables. Key areas covered include memory usage, query analysis, indexing strategies, and tuning storage engines like InnoDB and MyISAM.
Dataswft has been running several benchmarks at Intel Labs, Bangalore. Dataswft team is immensely thankful to the technical staff at Intel Labs Bangalore for providing access to their facilities and guidance.
This presentation presents cost effective options when running large workloads on Hadoop and the benefits using Dataswft.
Investigate SQL Server Memory Like Sherlock HolmesRichard Douglas
The document discusses optimizing memory usage in SQL Server. It covers how SQL Server uses memory, including the buffer pool and plan cache. It discusses different memory models and settings like max server memory. It provides views and queries to monitor memory usage and pressure, and describes techniques to intentionally create internal memory pressure to encourage plan cache churn.
All (that i know) about exadata externalPrasad Chitta
This document discusses Oracle's Exadata database appliance. It begins by contrasting software-defined systems with engineered appliances like Exadata. It then describes Exadata X3 specifications, architectural considerations for resource management and performance, and specific Exadata features like Smart Scan and storage indexes. Recommendations are provided for workload management and SQL. A case study shows significant performance improvements from migrating applications to Exadata. Some criticisms of Exadata are also mentioned before concluding that Exadata is well-suited for database consolidation.
OracleStore: A Highly Performant RawStore Implementation for Hive MetastoreDataWorks Summit
Today, Yahoo! uses Hive in many different spaces, from ETL pipelines to adhoc user queries. Increasingly, we are investigating the practicality of applying Hive to real-time queries, such as those generated by interactive BI reporting systems. In order for Hive to succeed in this space, it must be performant in all aspects of query execution, from query compilation to job execution. One such component is the interaction with the underlying database at the core of the Metastore.
As an alternative to ObjectStore, we created OracleStore as a proof-of-concept. Freed of the restrictions imposed by DataNucleus, we were able to design a more performant database schema that better met our needs. Then, we implemented OracleStore with specific goals built-in from the start, such as ensuring the deduplication of data.
In this talk we will discuss the details behind OracleStore and the gains that were realized with this alternative implementation. These include a reduction of 97%+ in the storage footprint of multiple tables, as well as query performance that is 13x faster than ObjectStore with DirectSQL and 46x faster than ObjectStore without DirectSQL.
Tuning the Applications Tier, Concurrent Manager, Client/Network, and Database Tier are discussed to provide an overview of performance methodology for optimizing the E-Business Suite. The presentation outlines best practices for tuning each layer including the applications tier, concurrent manager, database tier, and applications. Specific techniques are provided for optimizing forms, the Java stack, concurrent processing, network traffic, database configuration, I/O, statistics gathering, and performance monitoring using tools like AWR.
Top 5 things to know about sql azure for developersIke Ellis
This document summarizes the top 5 things to know about SQL Azure:
1. The connection string is used to connect applications to SQL Azure databases and includes elements like username, password, database name, and server name.
2. SQL Azure can return new error codes related to excessive locks, tempdb usage, log size, memory usage, and long running transactions. Application code needs retry logic to handle errors.
3. Tools for working with SQL Azure include the SQL Azure Database Manager, SQL Server Management Studio, and wizards for database migration.
4. Databases can be migrated from on-premises SQL Server to SQL Azure using SSIS, BCP or scripts generated in SSMS.
5
1) The document provides an overview of 5 key things developers should know about SQL Azure, including how to set up a connection string, issues of throttling and errors, tools for migration and management, and performance tuning tips.
2) Throttling occurs when a session acquires too many locks or resources and causes errors like 40501, and developers need retry logic to handle errors and disconnects.
3) Tools for SQL Azure include the SQL Azure Database Manager, SSMS 2008 R2, and the migration wizard.
4) Performance can be improved by addressing indexing, minimizing round trips, using connection pooling, and caching/batching data access.
ASH and AWR Performance Data by Kellyn Pot'VinEnkitec
This document provides an overview of Ash and AWR performance data. It discusses the history and purpose of ASH and AWR, how the AWR repository works, what data is contained in ASH samples, and how to run various ASH and AWR reports through the command line and Enterprise Manager. Specific examples are given around using ASH and AWR data to diagnose a blocking session issue on a RAC database. Best practices for querying ASH data directly are also covered.
SQL Server Wait Types Everyone Should KnowDean Richards
Many people use wait types for performance tuning, but do not know what some of the most common ones indicate. This presentation will go into details about the top 8 wait types I see at the customers I work with. It will provide wait descriptions as well as solutions.
This document discusses various issues encountered with a PostgreSQL database at InMobi. It includes discussions around high user connections, idle transactions, long-running queries, temporary file limits, out of memory errors, replication issues, partitions, tablespaces, SSH tunneling, and miscellaneous other topics. Potential solutions are provided around increasing connection pools, killing idle transactions, analyzing query plans, increasing configuration parameters, and ensuring proper replication setup.
This is the presentation on ASH that I did with Graham Wood at RMOUG 2014 and that represents the final best effort to capture essential and advanced ASH content as started in a presentation Uri Shaft and I gave at a small conference in Denmark sometime in 2012 perhaps. The presentation is also available publicly through the RMOUG website, so I felt at liberty to post it myself here. If it disappears it would likely be because I have been asked to remove it by Oracle.
The document discusses troubleshooting performance issues for SQL Server. It begins with an introduction and case study on the MS Society of Canada's website. It then discusses optimizing the environment, using Performance Monitor (PerfMon) to monitor performance, and concludes with recommendations to address issues like high CPU usage, slow disk speeds, and insufficient memory.
The presentation explains how to setup rate limits, how to work with 429 code, how rate limits are implemented in kubernetes, cni, loadbalancer and so on
The document discusses implementing research and development (RnD) teams in enterprises. It begins by defining different types of RnD, including scientific, industrial, and IT RnD. It then discusses important considerations for top management in establishing a successful RnD program, including committing to consume RnD deliverables and establishing an RnD committee and processes. The document outlines the lifecycle of an RnD project from MVP to product development. It provides guidance on setting up effective RnD teams and processes, such as prioritizing MVPs, establishing roles like the RnD lead, and ensuring a smooth handoff to product teams.
1. The document proposes a low-code solution for billing in a private cloud using open-source tools like KillBill and Prometheus.
2. It outlines an initial architecture that would ingest usage metrics from products, aggregate the data, and publish billing events to KillBill for invoicing and payments.
3. Exporters would collect metrics from products like S3 and ingress and expose them in a format readable by Prometheus for long-term storage and analysis by the billing system.
the presentation is about federated GraphQL in huge enterprises. I explain why and what for big enterprises need distributed GraphQL and classic one does not work.
This document discusses the importance of implementing FinOps practices to optimize cloud spending. FinOps advocates for collaborative work between development, operations, and finance teams to provide transparency into infrastructure costs, optimize resource utilization, and balance speed of development with cloud efficiency. The document outlines why FinOps is needed due to rising cloud bills and lack of visibility. It proposes implementing tagging, metrics, and recommendation systems to allocate costs and identify optimization opportunities in a decentralized manner. FinOps requires cultural and process changes, as well as open source tooling, to establish a collaborative cost management approach.
The document discusses big data and open-source relational database management systems (RDBMSs). It begins by introducing the speaker and their background in cloud architecture and enterprise databases. The rest of the document summarizes various RDBMS options for big data, including PostgreSQL and extensions like Citus and Timescale, as well as columnar databases like Greenplum and Clickhouse. It compares the systems based on their SQL support, integration capabilities, data ingestion styles and performance, support for wide tables versus star schemas, horizontal scaling abilities, and availability as managed cloud services.
The majority of cloud-based DWH provides a wide range of migration tools from in-house DWH. However, I believe that cloud migration success is based not only on reducing infrastructure maintenance costs, but also on additional performance profit inherited from tailored data model.
I am going to prove that copying star or snowflake schemas as is will not lead to maximum performance boost in such DWH as Amazon Redshift and Google BigQuery. Moreover, this approach may cause additional cloud expenses.
We will discuss why data models should be different for each particular database, and how to get maximum performance from database peculiarities.
Most of performance tuning techniques for cloud-based DWH are about adding extra nodes to cluster, but it may lead to performance degradation in some cases, as well as extra costs burden. Sometimes, this approach allows to get maximum speed from current hardware configuration, may be even less expensive servers.
I will show some examples from production projects with extra performance using lower hardware, and edge cases like huge wide fact table with fully denormalized dimensions instead of classical star schema.
This document discusses in-memory business intelligence (BI) solutions. It describes different types of in-memory databases and data visualization tools that can be used for BI, including both open source and commercial options. Key open source in-memory databases mentioned are Tarantool, Apache Ignite, and Arenadata. For data visualization, open source tools discussed include Pentaho Reporting, Saiku BI, Apache Zeppelin, and Reporting Server. The document provides pros and cons of different solutions and recommends using a community edition of an in-memory database along with an open source or enterprise data visualization toolkit for a fully open source in-memory BI setup.
Alexander Tokarev is a database performance architect who gave a presentation on his experience using Oracle In-Memory technology for a faceted search project. The project involved tagging 3 million objects with 42 million tags, which was loaded into an Oracle database. Initial performance testing without In-Memory showed slow query speeds. After implementing Oracle In-Memory, query performance improved up to 21 times faster. However, Tokarev discovered that not all Oracle In-Memory features provided significant benefits and some caused issues. With tuning, the final In-Memory implementation led to a 4x overall performance boost for the faceted search queries.
This document discusses Alexander Tokarev's presentation on Oracle In-Memory from his experience working on performance tuning and proof of concepts. He worked on a faceted search project that saw a 5x performance boost from implementing Oracle In-Memory technology. Key findings included that not all data needs to be loaded into memory and understanding Oracle In-Memory internals like compression settings and repopulation processes are important for optimizing performance.
The presentation describes how to design robust solution for tagging search, how to use tagging for faceted search. Various architecture and data patterns are considered. We discuss relational databases like Oracle, full text search servers like Apache Solr. We will see how Oracle 18c features permit to use embedded faceted search.
This document compares different approaches for storing tags in a database to enable high performance search and retrieval. It discusses normalization, denormalization, using complex data types, and full-text search approaches. Benchmarks on initialization time, storage size, search speeds and maintenance requirements are provided based on a test using StackOverflow post data. The conclusion is that no single approach is best and the optimal solution depends on factors like performance needs, data volume, and engineer experience. Understanding how the database handles different models is important for choosing the right fit.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
SMS API Integration in Saudi Arabia| Best SMS API ServiceYara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Microservice Teams - How the cloud changes the way we workSven Peters
A lot of technical challenges and complexity come with building a cloud-native and distributed architecture. The way we develop backend software has fundamentally changed in the last ten years. Managing a microservices architecture demands a lot of us to ensure observability and operational resiliency. But did you also change the way you run your development teams?
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
When it is all about ERP solutions, companies typically meet their needs with common ERP solutions like SAP, Oracle, and Microsoft Dynamics. These big players have demonstrated that ERP systems can be either simple or highly comprehensive. This remains true today, but there are new factors to consider, including a promising new contender in the market that’s Odoo. This blog compares Odoo ERP with traditional ERP systems and explains why many companies now see Odoo ERP as the best choice.
What are ERP Systems?
An ERP, or Enterprise Resource Planning, system provides your company with valuable information to help you make better decisions and boost your ROI. You should choose an ERP system based on your company’s specific needs. For instance, if you run a manufacturing or retail business, you will need an ERP system that efficiently manages inventory. A consulting firm, on the other hand, would benefit from an ERP system that enhances daily operations. Similarly, eCommerce stores would select an ERP system tailored to their needs.
Because different businesses have different requirements, ERP system functionalities can vary. Among the various ERP systems available, Odoo ERP is considered one of the best in the ERp market with more than 12 million global users today.
Odoo is an open-source ERP system initially designed for small to medium-sized businesses but now suitable for a wide range of companies. Odoo offers a scalable and configurable point-of-sale management solution and allows you to create customised modules for specific industries. Odoo is gaining more popularity because it is built in a way that allows easy customisation, has a user-friendly interface, and is affordable. Here, you will cover the main differences and get to know why Odoo is gaining attention despite the many other ERP systems available in the market.
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
1. 100500 Ways of Caching in Oracle Database, or How to Achieve Maximum Query Processing Speed at Minimal Cost
Alexander Tokarev
DataArt
2. Agenda
• Database caches
• Result cache
• Result cache in DBMSs other than Oracle
• Hand-made Oracle result cache implementation
• Embedded Oracle result cache implementation
• Performance tests
• Limitations and caveats
• Cases
• Conclusion
4. Database caches
• Buffer cache – cache for data pages/data blocks
• Statement cache – cache of query plans
• Result cache – rows from queries
• OS cache
5. Retailer case
DWH report, Oracle 11, 20 Tb, 300 users
Report time: 20 min; 5000 rows returned, only 350 distinct SKUs

Select sku_id,
       shop_id,
       sku_detail(sku_id),
       .....
  from dim_sales
 where ....
 order by shop_id, ......

Create or replace function sku_detail(sku_id number) return number is
  -- 400 lines of SQL + PL/SQL:
  -- Select 1; if ... then Select 2; else Select 3; ... Select 30
End;

0.2 second per SKU: 5000 * 0.2 = 1000 seconds
6. Retailer case: Hand-made cache
DWH report, Oracle 11, 20 Tb, 300 users
Report time: 4 min; 5000 rows returned, only 350 distinct SKUs

Select sku_id,
       shop_id,
       sku_detail(sku_id),
       .....
  from dim_sales
 where ....
 order by shop_id, ......

Create or replace function sku_full(sku_id number) return number is
  -- the same 400 lines of SQL + PL/SQL:
  -- Select 1; if ... then Select 2; else Select 3; ... Select 30
End;

0.2 second per SKU: 350 * 0.2 = 70 seconds
CREATE OR REPLACE PACKAGE BODY cache_sku AS
  TYPE sku_cache_aat IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  cache sku_cache_aat;

  FUNCTION sku_detail(sku NUMBER) RETURN NUMBER IS
  BEGIN
    -- compute once per session, then serve from the PL/SQL collection
    IF NOT cache.EXISTS(sku) THEN
      cache(sku) := sku_full(sku);
    END IF;
    RETURN cache(sku);
  END sku_detail;
END cache_sku;
8. Hand-made cache
Pros:
- Very fast
- Easy to implement
- No configuration efforts
- No intra-process sync logic burden
Cons:
- Cache consumes expensive memory from DB
- Memory is allocated on a per-session basis
- PL/SQL or other DB stored logic is required
- Vendor specific
- No automatic invalidation
11. Case 2: Recommendation engine
Architecture: client browser -> load balancer -> 2 application servers + in-memory cluster -> Oracle main + Oracle DG
Load: 4000 users, 10000 requests per second
12. Case 2 Recommendation engine
Recommendation rules
1. 10 best recommendations by text match
2. Multilanguage capabilities
3. Should be taken from the 12 previously recognized documents of the client
4. If there are no such documents – from all clients in the same industry
5. If none in the same industry – from clients similar by margin, etc.
Result set: max 100 rows, 2-3 columns; max 100 concurrent users
13. Case 2 Recommendation engine
1 week before the Release
1. Recommendations are slow – 5 minutes per document
2. Code freeze
14. Case 2 Recommendation engine
Solution
1. Use database to cache queries
2. Use Oracle Database Result Cache
Why
1. The SQL to get a recommendation takes 0.5 sec with no options for query
tuning – Oracle full-text search engine plus a really heavy SQL statement
2. The same parameters appear at least 5-10 times – the cache will be hit
3. The data used for recommendations is refreshed on an hourly basis
4. PL/SQL is prohibited
15. Oracle Result Cache
Oracle result cache
1. A memory area for sharing query result sets
2. Read-consistent – automatic invalidation on DML
3. Automatic dependency tracking
4. Minimal changes in the application
5. There is even an option that requires no application changes at all
6. Can cache PL/SQL results as well
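In its simplest form, the cache is requested per statement with a hint; a minimal sketch (table, column, and bind names are illustrative):

```sql
-- First execution computes and stores the result set;
-- subsequent executions with the same bind values are served from memory
-- until a dependent table is modified.
SELECT /*+ RESULT_CACHE */ sku_id, recommendation_text
FROM   recommendations
WHERE  industry_id = :ind;
```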
26. Oracle Result Cache Invalidation
Within an uncommitted transaction the cache is bypassed for the current session,
but still served to other sessions
27. Oracle Result Cache Invalidation
Invalidated for other sessions after commit!
28. Oracle Result Cache Invalidation
Unexpected cache invalidation:
1. A SELECT FOR UPDATE statement, even if there were no changes at all
2. An unindexed foreign key + a delete/update/insert of a record in the
parent table
3. An update/delete on the main table affecting no rows + an update to any
table where rows were affected
P.S. The Result Cache does not track partitions: even if a cached query touches
only 1 partition, dependency tracking is always at table level.
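The first pitfall is easy to reproduce; a sketch, assuming a table named orders:

```sql
-- Session A: populate the cache
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM orders;

-- Session B: lock a row without changing anything
SELECT * FROM orders WHERE order_id = 1 FOR UPDATE;
COMMIT;

-- Session A: the cached result is now invalid and gets rebuilt,
-- although no data was actually modified.
SELECT /*+ RESULT_CACHE */ COUNT(*) FROM orders;
```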
29. Case 2 Recommendation engine
Final solution
1. Do not use table annotations – not all queries should be cached
2. Use /*+ result_cache */ only for the long-running query
3. Performance tested: document recognition takes 30 seconds.
Time for production!
30. Case 2 Recommendation engine Early morning
Level 3 support
Production incident
Severity 1
Users can't perform document recognition. Recognition takes at least 20
minutes. Sessions hang.
Regards, L2 support team.
31. Case 2 Recommendation engine
• Active user count: 400
• Database active session count: 1200 = 400 * 3
• Row count: 500
• Column count: 5-8
5x more than expected!
32. Monitoring features
View Name | Description
V$RESULT_CACHE_STATISTICS | Lists cache settings and memory usage statistics
V$RESULT_CACHE_MEMORY | Lists all the memory blocks and corresponding statistics
V$RESULT_CACHE_OBJECTS | Lists all the objects (cached results and dependencies) along with their attributes
V$RESULT_CACHE_DEPENDENCY | Lists the dependency details between cached results and dependencies
V$SQLAREA | Lists SQL statements issued inside the Oracle database
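A quick health check against these views might look like this (all views and columns are standard):

```sql
-- Hit-ratio ingredients: creates, finds, invalidations
SELECT name, value
FROM   v$result_cache_statistics
WHERE  name IN ('Create Count Success', 'Find Count', 'Invalidation Count');

-- Currently cached results and their status
SELECT name, status, creation_timestamp
FROM   v$result_cache_objects
WHERE  type = 'Result';
```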
33. Management features
Package: DBMS_RESULT_CACHE
Procedure Name | Description
BYPASS | Instructs Oracle to ignore the result cache, for the current session or for the whole DB
FLUSH | Cleans the cache
MEMORY_REPORT | Produces a detailed memory report
STATUS | Checks the cache status
INVALIDATE | Invalidates the specified result-set object
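Typical calls, sketched (the owner/table names in the invalidate call are illustrative):

```sql
-- Bypass the cache for the current session only
exec dbms_result_cache.bypass(bypass_mode => true, session => true);

-- Detailed memory report, printed via DBMS_OUTPUT
set serveroutput on
exec dbms_result_cache.memory_report(detailed => true);

-- Invalidate every cached result depending on one table
exec dbms_result_cache.invalidate(owner => 'APP', name => 'ORDERS');

-- Cache status (ENABLED, DISABLED, BYPASS, ...)
select dbms_result_cache.status from dual;
```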
35. Case 2 Recommendation engine Investigations
Strange queries against 40 small tables every minute:
ETL
36. Case 2 Recommendation engine Investigations
Result cache annotation
Still 20 minutes per document
37. Case 2 Recommendation engine
We have received very positive feedback about Oracle Adaptive Statistic feature from customer with respect to adaptive
plans. It has proved to be very able at improving system performance for a huge range of workloads. (c) Oracle
20000 queries
10 minutes per document!!!
Caused by a bug? WTF!!!
38. Result cache latches
Latches are Oracle-internal low-level locks that protect the memory
structures of the system global area (SGA) against simultaneous
accesses.
40. Result cache latches Type 1
When sets
First row of dataset is placed in Result Cache
When release
Last row of dataset is placed in Result Cache
Who waits
Sessions with same SQL which requested the latch
How much
_RESULT_CACHE_TIMEOUT – 10 seconds by default; after that the result cache is bypassed.
41. Result cache latches Type 2
When sets
First row of dataset is requested from Result Cache
When release
Last row of dataset is read from Result Cache
Who waits
Sessions with same SQL which requested the latch
How much
It depends
42. Result cache latches
Latches not only make SQL wait but also consume CPU.
There is no way to get rid of result cache latches – slow in a highly
concurrent environment.
Be ready to convince your DBA that latch wait time still saves overall DB time.
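Contention on this latch can be watched directly in v$latch; the latch name below matches the one from MOS Doc ID 2143739.1:

```sql
-- Gets vs. misses/sleeps on the result cache latch
SELECT name, gets, misses, sleeps, wait_time
FROM   v$latch
WHERE  name = 'Result Cache: RC Latch';
```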
43. Result cache statistics
NAME | VALUE
Block Size (Bytes) | 1024
Block Count Maximum | 4096  <- they are equal – the cache is full!!!
Block Count Current | 4096  <-
Result Size Maximum (Blocks) | 204
Create Count Success | 500
Create Count Failure | 0
Find Count | 20000
Invalidation Count | 10000
Delete Count Invalid | 155
Delete Count Valid | 14000  <- proper results are deleted
Hash Chain Length | 1
Find Copy Count | 1770
Latch (Share) | 0
44. Memory estimate
Naive formula: Result Cache Size = row width (bytes) * expected row count

But memory is allocated in blocks!!!
Block Size (Bytes) | 1024
Block Count Maximum | 4096
Block Count Current | 4096

Correct formula: Result Cache Size = block size (1024, if a row fits into one block) * expected row count
46. Administration
Parameter | Purpose
RESULT_CACHE_MAX_SIZE | memory allocated to the server result cache, in bytes; default – 0 bytes
RESULT_CACHE_MAX_RESULT | maximum amount of server result cache memory (in percent) that can be used by a single result; default 5%
RESULT_CACHE_MODE | default MANUAL, meaning the cache must be requested explicitly via the RESULT_CACHE hint
_RESULT_CACHE_TIMEOUT (undocumented) | maximum time a session waits for a latch; default 10 sec

6 minutes per document!!!
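Resizing the cache is a one-liner; the values below are illustrative:

```sql
-- Grow the server result cache and allow bigger single results
ALTER SYSTEM SET result_cache_max_size = 64M SCOPE = BOTH;
ALTER SYSTEM SET result_cache_max_result = 10 SCOPE = BOTH;

-- Verify the effective settings
SELECT name, value FROM v$parameter WHERE name LIKE 'result_cache%';
```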
47. Case 2 Recommendation engine
The same statistics as on slide 43; the telling figure:
Invalidation Count | 10000  <- a lot of updates on the source tables
49. Final statistics for result cache
40 seconds per document!!!

NAME                         | BEFORE | AFTER
Block Size (Bytes)           | 1024   | 1024
Block Count Maximum          | 4096   | 8192
Block Count Current          | 4096   | 6000
Result Size Maximum (Blocks) | 204    | 204
Create Count Success         | 500    | 1000
Create Count Failure         | 0      | 0
Find Count                   | 20000  | 20000
Invalidation Count           | 10000  | 30
Delete Count Invalid         | 155    | 155
Delete Count Valid           | 14000  | 0
Hash Chain Length            | 1      | 1
Find Copy Count              | 1770   | 1770
Latch (Share)                | 0      | 0
50. Case 2 Recommendation engine Auto-expiring
SHELFLIFE = read-consistent result lifetime in seconds
SNAPSHOT = non-read-consistent result lifetime in seconds
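Both are passed as (undocumented) parameters inside the hint; a sketch with an illustrative table name:

```sql
-- Expire after 600 s, still invalidated by DML on the source (read-consistent)
SELECT /*+ RESULT_CACHE(SHELFLIFE=600) */ sku_id, score FROM recommendations_src;

-- Live exactly 600 s, ignore DML on the source (non-read-consistent)
SELECT /*+ RESULT_CACHE(SNAPSHOT=600) */ sku_id, score FROM recommendations_src;
```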
51. Restrictions
• Dictionary tables/views (SYS schema)
• Temporary and external tables
• Sequences (NEXTVAL and CURRVAL pseudocolumns)
• Non-deterministic SQL functions:
CURRENT_DATE, CURRENT_TIMESTAMP, LOCALTIMESTAMP, SYS_GUID, ...
• Non-deterministic PL/SQL functions:
DBMS_RANDOM, hand-written ones, ...
• Pipelined functions (returning rowsets)
• Only IN parameters with simple data types: no CLOB, BLOB, records, objects,
collections, ref cursors
• The same restriction applies to the return value
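For PL/SQL, the cache is requested in the function declaration; a minimal sketch (the function and table names are illustrative):

```sql
CREATE OR REPLACE FUNCTION sku_detail (p_sku_id IN NUMBER)
  RETURN NUMBER
  RESULT_CACHE   -- results shared across sessions, invalidated on DML
IS
  l_result NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_result
    FROM sku_attributes
   WHERE sku_id = p_sku_id;
  RETURN l_result;
END sku_detail;
/
```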
52. Result cache inside Oracle
Where in Oracle
Jobs related stuff
SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) */
OBJ#,SCHEDULE_LMT,PRIO,JOB_WEIGHT FROM "SYS"."SCHEDULER$_PROGRAM" WHERE bla-bla-bla
APEX
SELECT /*+result_cache*/ NAME, VALUE FROM WWV_FLOW_PLATFORM_PREFS
WHERE NAME IN ( 'QOS_MAX_WORKSPACE_REQUESTS', 'QOS_MAX_SESSION_REQUESTS', bla-bla-bla
select *
  from v$sqlarea
 where upper(sql_fulltext) like '%RESULT_CACHE%'
55. Database result cache pros&cons
Pros:
- Minimal or no intervention at all into application code
- No DB stored logic required
- Read consistency
- Fast in certain scenarios
Cons:
- Cache consumes expensive memory from database
- Should be properly set up
- Sometimes it can even lead to performance degradation
- Vendor specific
57. 1. Cache size was not estimated: Troubleshooting Latch Free (Result Cache: RC Latch) Issues When The Result Cache is Full (Doc ID 2143739.1)
2. Locking: Patch 14665745: DBMS_RESULT_CACHE.MEMORY_REPORT LOCKS OUT WRITERS TO THE RESULT CACHE
Bug 19846066: LATCH FREE IN RESULT CACHE WHEN QUERYING V$RESULT_CACHE_OBJECTS
58. We are not alone
result_cache_max_size changed, /*+ result_cache */ removed, or
dbms_result_cache.add_to_black_list called, or
/*+ no_result_cache */ added
59. We are not alone: Lessons learned
Best approach to roll out updates:
1. Adjust the result cache memory
2. Bypass the cache around bulk loading:
exec dbms_result_cache.bypass(true);
-- run data ingestion scripts
exec dbms_result_cache.bypass(false);
60. Client side result cache
(Diagram: the client driver holds a shared result cache used by all connection threads. 1. connect; 2. the DB sends a configuration message including CACHE SIZE; 3. SQL is sent to the DB and 4. results are returned and cached; 5. a repeated cached SQL from any thread is 6. answered from the client cache; the driver periodically sends statistics messages back to the DB.)
61. Client side result cache: Invalidation Case 1
(Diagram: client cache is ON; invalidation rules are the same as for the server-side result cache. While the time since the last cached SQL is below the invalidation lag, any 1. non-cached SQL round trip 2. brings back both the results and the list of invalid result sets.)
62. Client side result cache: Invalidation Case 2
(Diagram: client cache is ON; invalidation rules are the same as for the server-side result cache. Once the current time exceeds the time of the invalidation message plus the invalidation lag with no intervening round trips, the driver itself 1. requests and 2. receives the invalid result set list.)
63. Client side result cache: Configuration
Parameter Purpose
CLIENT_RESULT_CACHE_LAG The maximum time in milliseconds that the client result cache can lag behind changes in the database that affect its result sets. Default: 3000 ms.
CLIENT_RESULT_CACHE_SIZE The maximum size of the client result set cache for each client process. Default: 0 (not active); min: 32 KB; max: 2 GB.
66. Client side result cache
NAME VALUE
Block Size (Bytes) 256
Block Count Maximum 256
Block Count Current 3
Create Count Success 1
Create Count Failure 0
Find Count 9
Invalidation Count 0
Delete Count Invalid 0
Delete Count Valid 0
Create Count Success (1) + Find Count (9) = 10 executions in total
67. Client side result cache pros & cons
Pros:
- Cheap client memory
- JDBC and .NET drivers
- Minimal or no intervention into application code
- Significant CPU, I/O, and network round-trip reduction
- No extra caching layer/API is required
- No latches
Cons:
- Eventual read consistency (with a delay)
- Requires the Oracle OCI client to be installed
- Vendor specific
- 2 GB per-client limitation
- Little evidence of production use
68. Hand-made cache
Bad scenario:
• Cache invalidation on data changes is a must
• Database stored logic is not in favor
Good scenario:
• There is a strong database developer team
• PL/SQL business logic is already in place
• There are limitations that rule out other caching techniques
69. Server side result cache
Bad scenario:
• SQL populates a large number of distinct result sets
• The SQL statement takes longer than _RESULT_CACHE_TIMEOUT
• Cached results are requested very often from many sessions
Good scenario:
• Queries have a limited number of possible result sets
• Result sets are relatively small (200-300 rows)
• SQL statements are relatively expensive
• Queries run against relatively static tables
• There is a strong DBA
70. Client side result cache
Bad scenario:
• Instant cache invalidation on data changes is a must
• Thin drivers are required
Good scenario:
• There is a fine middle-tier developer team
• The middle tier uses a lot of SQL without any caching layer
• There are DB server hardware limitations
71. Conclusion
1. Estimate the memory size properly:
volume (MB) = (
number of cached query results * block size +
avg number of APEX results +
avg number of adaptive statistics results
) / 1024
2. Add auto-expiring capabilities with the (SNAPSHOT + SHELFLIFE) options
3. Bypass the cache during bulk data changes
4. Adjust _result_cache_timeout to the expected query duration
5. Never use FORCE mode for the whole database
6. Check that FORCE is used as expected in table annotations
7. Decide about adaptive statistics: _optimizer_ads_use_result_cache = false
Good afternoon. My name is Alexander, and at DataArt I work on database matters, both building systems from scratch and optimizing existing ones.
So, today we have a standard presentation on the architecture of yet another Oracle feature for speeding up data access.
Today I will talk about many important things, such as...
Actually no, that is too boring.
The value of this conference is that it is not about technical details you can find on Google, but about practical examples of their use and their nuances.
In this talk I will explain how the server-side Result Cache technology works and how it compares to hand-made PL/SQL caching, using two DataArt projects as examples. We will then sum up the results of those projects and work out some approaches to using this technology correctly. We will also take a very brief look at what other DBMSs offer for caching query results.
Armed with that knowledge, we will try to understand the causes of an outage in a Russian cloud loyalty-calculation system that recently happened precisely because of that result cache.
Oracle also has a client-side result cache; I will briefly cover its architecture without going into detail, since it deserves a talk of its own.
There are three main kinds of caches in databases: the data cache, the statement/plan cache, and the cache of result rows. Interestingly, of the databases I know, the last one survives only in Oracle. PostgreSQL has no result cache; it exists only in the third-party product pgpool. This is due to certain complications that we will look at below.
So, Case 1: a retailer's data warehouse.
There was a warehouse, and it had a report. Producing it took about 20 minutes, and users were unhappy. What was the intrigue of this report? Across its 5000 rows of data there were only 350 distinct products, yet the product-information function was called for every single row. The function's code was quite complex, hard to refactor, and many parts of it were simply scary to rewrite. Since the system was in maintenance mode, using anything new like the built-in result cache was forbidden, so we used the standard approach of hand-made caching.
So, we renamed the slow function and created a package with a new function that uses an associative array in that package. The array is effectively an on-demand cache: if the data is not in it, the original function is called.
It is important to understand which memory area collections live in. They are placed in the PGA, the memory area allocated per session, and that is exactly what determines their pros and cons.
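The on-demand pattern above can be sketched in a few lines of Python (the function and its return value are hypothetical stand-ins for the original PL/SQL function; `functools.lru_cache` plays the role of the package-level associative array):

```python
import functools

call_count = 0  # counts real executions of the expensive lookup

def get_item_info_slow(item_id):
    """Stands in for the original heavyweight PL/SQL function (hypothetical)."""
    global call_count
    call_count += 1
    return ("item-%d" % item_id, "some attributes")

@functools.lru_cache(maxsize=None)  # package-level associative array analogue
def get_item_info(item_id):
    # On a cache miss the slow function is called; on a hit it is not.
    return get_item_info_slow(item_id)

# A 5000-row report touching only 350 distinct items:
for row in range(5000):
    get_item_info(row % 350)

print(call_count)  # 350: the slow function ran once per distinct item
```

Like the PGA-based PL/SQL version, this cache lives inside one process and vanishes with it; nothing invalidates it automatically.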
So, hand-made caches.
The pros are obvious: easy to program, no configuration, no need to think about synchronization, and they are simply fast!
The cons are just as clear: if stored logic is banned in a project, they cannot be used; there is no automatic invalidation mechanism; and since cache memory is allocated per database session rather than per instance, memory consumption is inflated. Moreover, when a connection pool is used, you must remember to reset the caches if caching has to differ between sessions.
There are other kinds of hand-made caches based on materialized views or temporary tables, but they put more load on the I/O subsystem, so this presentation does not cover them; they are a better fit for other databases. I also skip the Oracle client cache and scalar subquery caching, since I have few spectacular cases for them, but I am happy to discuss them after the talk.
Let's look at how the caching task is often solved in MSSQL, for fetching a list of related products.
The GetRelatedItems table is recalculated, for example, by a periodic job or right before the corresponding piece of functionality is used. If it has no data, the complex view is queried instead.
Overall, the approach is quite similar, but it works outside database memory both when reading the data and during the initial population, so it can be slower.
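The "cache table with a fallback to the expensive view" pattern can be sketched as follows (all names are illustrative, not from a real schema; a dict stands in for the GetRelatedItems table):

```python
related_items_cache = {}  # stands in for the GetRelatedItems cache table

def expensive_related_items_view(item_id):
    """Stands in for the complex view; pretend this query is slow."""
    return [item_id + 1, item_id + 2]

def get_related_items(item_id):
    # 1. Try the precomputed cache table first.
    if item_id in related_items_cache:
        return related_items_cache[item_id]
    # 2. Cache miss: hit the expensive view and persist the result,
    #    the same way a lazy population would.
    result = expensive_related_items_view(item_id)
    related_items_cache[item_id] = result
    return result

def refresh_cache(item_ids):
    """Periodic-job analogue: rebuild the cache before the feature is used."""
    related_items_cache.clear()
    for item_id in item_ids:
        related_items_cache[item_id] = expensive_related_items_view(item_id)

refresh_cache([1, 2, 3])
print(get_related_items(2))   # served from the cache table
print(get_related_items(99))  # miss: falls back to the view
```

Note that, as in the MSSQL version, invalidation is entirely manual: stale entries persist until the next refresh.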
In short, hand-made result caches are widely used, but another way to implement the same task is the in-database result cache. We will look at it, and at how it failed to be a quick win, next...
Now the second case: a system for semi-automated processing of financial documents. The architecture is standard Enterprise: a client, a load balancer, two application servers running the business logic, an Oracle database, and its standby instance.
One of its many tasks is correcting documents after automatic recognition.
Simplified, it looks like this:
There are documents; for every indicator the system failed to recognize automatically, it suggests a set of values taken either from the client's previous documents, from a similar industry, or from companies with similar revenue, and it also compares against the recognized value so as not to suggest anything spurious. Importantly, the documents are multilingual.
The user picks the right value and repeats this for every empty row.
Note that indicators repeat both across rows and across columns.
Overall, the task of...
One week before the release:
Processing a document takes at least 5 minutes.
The Java code cannot be changed.
The database development team is asked for help.
We decide to use the database and the Oracle Result Cache because:
The optimization options are exhausted.
The parameters repeat heavily.
The recommendation data is rarely updated, since it relies on a full-text index.
So what is the result cache? It is an Oracle technology for caching results with minimal impact on the application.
What does it look like? In essence, it is enabled by the result_cache hint. On the second execution you can see that no database operations happen at all: everything comes from the cache. As you can see, the changes are minimal.
There is a second way, namely annotations. They enable result_cache for a table whenever it appears in a query.
If even one table in the query lacks the annotation, result_cache disappears.
If all of them have it, the whole query is cached.
For SQL, dependencies are determined from the query plan, which is rather amusing: Oracle can transform a query, throwing unneeded tables out of it, and those tables will not end up in the dependency list. For example, the query on the slide underwent the join elimination transformation because an FK exists, and the table is absent from the dependencies.
Drop the constraint and Oracle recalculates the dependency tree. For PL/SQL code, dependencies are determined at run time. This even makes dependency tracking possible for dynamic SQL and complex conditional logic.
Oracle lets you cache not only the whole query result but also parts of it: either an inline view in WITH form,
or in FROM form.
Moreover, you can create a cached view. In a join, for example, the table is read as usual while the view is taken from the result cache.
Now let's see when Oracle invalidates the result cache.
You can see that if changes happened within your own session, the cache is ignored in that session only; other sessions keep using the saved result. Once a commit happens, the other sessions wait for a new result.
As soon as we commit our changes, the cached result becomes stale for the other sessions as well.
Unfortunately, it is not all smooth. Oracle also performs invalidations in a number of non-obvious cases:
1. On any SELECT FOR UPDATE.
2. If your table has an unindexed foreign key and data changed in its parent table. This happens even if the parent table is not mentioned in the query.
3. An unsuccessful update of the main table combined with a successful update of another table in the same statement.
It looks like invalidation actually takes into account the presence of a lock, the fact of an attempted update, and a nonzero number of affected rows, regardless of which table they belong to.
So, having studied all of the above, we decided to go to production.
Arriving at work in the morning, we found an email along these lines: why are sessions hanging? How did 30 seconds turn into 20 minutes?
So we started digging.
We saw 40 users running recognition; let's even skip the fact that there were not 10 of them, as we expected. Far stranger, the number of sessions in the database was consistently exactly three times higher.
An internal investigation showed that the Java developers run recognition in 3 threads.
And that was not even at peak. This was our first mistake: we underestimated the load. But even so, there should not have been such a slowdown. I will explain later why the result cache dislikes being hit frequently.
To troubleshoot the result cache we need a small set of views and stored procedures.
The most important one for us is the procedure that reports memory usage.
Importantly, the documentation says nothing about these cases; they are visible only from support notes. So ALL the result cache nuances live only on support.
We decided to find out how many objects were cached, using the v$result_cache_objects view. There were clearly far more entries than we expected.
We also looked at what those objects were. To our great surprise, they were not our queries, and from their shape it was clearly an ETL process.
Then we remembered that we ourselves had enabled annotations on those tables, since the application frequently needed their data and caching was appropriate there. But the ETL queries scanning for changed data at one-minute intervals flooded the cache and washed our queries out of it. We did not remove the annotations, but we forcibly disabled caching in the ETL.
We purged the objects, but their number soon returned to 120,000. Since the speed had not changed, we kept investigating what else was being cached.
We found the following queries. They came from Adaptive Statistics, which Oracle uses when building execution plans.
On the support forum we quickly found a bug about this feature and the result cache. We disabled its use of the result cache, and performance improved to 10 minutes per document.
So what are these "latch free" waits that show up with the result cache?
Let's look at what a latch is and what latches lead to.
Since read consistency must be guaranteed, locks are required. The result cache is the only place where readers can block readers.
Oracle tries to acquire the latch several times and then goes to sleep.
(Explanation of latches.)
So, we obtained the memory usage report.
Tell me, which figures show that something went wrong?
It thus became clear that memory was short because its volume had been calculated incorrectly.
We had used the formula result-row width * expected number of results, but did not account for memory being allocated in blocks of at least 1 KB.
That is what caused the familiar errors when the cache overflowed.
So how should memory be estimated? In blocks.
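The difference between the two estimates can be shown numerically. The sketch below assumes a 1 KB block and one whole block as the minimum allocation per cached result (the figures are illustrative, not taken from the project):

```python
import math

BLOCK_SIZE = 1024  # the result cache allocates memory in blocks (1 KB here)

def naive_estimate(avg_row_bytes, rows_per_result, n_results):
    """The mistake: raw payload bytes, ignoring block granularity."""
    return avg_row_bytes * rows_per_result * n_results

def block_estimate(avg_row_bytes, rows_per_result, n_results):
    """Each cached result occupies a whole number of blocks, at least one."""
    payload = avg_row_bytes * rows_per_result
    blocks = max(1, math.ceil(payload / BLOCK_SIZE))
    return blocks * BLOCK_SIZE * n_results

# 100,000 tiny cached results of a single 40-byte row each:
print(naive_estimate(40, 1, 100_000))  # 4,000,000 bytes (~4 MB)
print(block_estimate(40, 1, 100_000))  # 102,400,000 bytes (~98 MB)
```

For many small results the block-granular figure is more than an order of magnitude larger, which is exactly how a "correctly" sized cache ends up overflowing.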
So, do we really need all these parameters? Of course not!
As already mentioned, four are enough. In this case we got by with one: the total memory size.
Although we had improved from 20 minutes to 6, the time was still unacceptable. Let's see what else the cache usage report can give us.
Do you see anything else strange in the report?
After some digging, we discovered that the recommendation-refresh job was disabled, which in practice meant the data was being updated immediately, all the time.
We started the job at the one-hour interval as originally intended, which automatically stopped the constant refreshing of the table.
As a result, the number of invalidations dropped to a minimum, deletion of valid entries disappeared, and performance returned to the expected range even under five times the load.
We did not want this situation to recur, and while studying how Oracle uses the result cache in its own kernel we discovered the undocumented SHELFLIFE parameter: once it elapses, the query result removes itself from the cache. This parameter was built into the new version of the application. Note that the result is still removed on data changes as well.
If data changes are not critical for your cache, you can use the SNAPSHOT option instead; then data changes will not invalidate the cache.
If our cases have not scared you away from the result cache, there are still several restrictions we have not mentioned, some of them obvious.
Objects in the SYS schema cannot be cached.
Temporary and external tables cannot be cached. Importantly, in fact they can be, and Oracle does not explicitly prevent it. This makes it possible to see something previously unthinkable, namely the contents of other users' temporary tables. Moreover, Oracle declares this fixed, but in 12.2 the problem is still there.
Non-deterministic SQL and PL/SQL functions cannot be used.
Nor can pipelined functions.
Input and output parameters must be of simple data types.
There are actually ways to work around the current_date restrictions; I can show the scripts after the talk.
The result cache is used extensively by the Oracle kernel itself. To find such places you can query the shared pool.
The cache is used heavily for jobs and by the APEX application development environment. Note the undocumented SYSOBJ option: this is exactly where I found it.
Adaptive statistics and dynamic sampling, the mechanisms for correct statistics generation and plan transformation, also store their data through the result cache. Importantly, these mechanisms use the SNAPSHOT option, which is where I discovered it.
Let's briefly summarize how the result cache works:
On a query, data moves from the storage layer into the buffer cache.
Data moves from the buffer cache into the result cache memory area.
Results are reused under latches.
Based on what we have heard, let's sum up the pros and cons.
First the pros:
You can avoid changing the application code at all, or keep the changes minimal.
No internal programming languages are required.
Data consistency under multi-user access, through automatic invalidation.
It can be very fast.
The cons we have already seen:
The cache must be used correctly; in a wrong scenario it can create an illusion of a speedup followed by a slowdown.
The database must be configured properly.
The solution is very proprietary (though I do not believe in the myth of application independence from the database).
It is worth noting that even systems developed by Oracle have run into problems from incorrect use of the result cache.
For example, even the Oracle E-Business Suite ERP system is prone to failures caused by incorrect use of the result cache.
What interests us, though, is not the mere existence of problems but ways to prevent them, since we now have enough information to do so. While preparing this presentation, I found a letter from the technical support service of the largest Russian cloud loyalty-calculation system. Well-known cosmetics chains, beauty-industry brands, and large electronics retailers calculate their parameters on it.
So, let's turn to the letter itself.
Let's look at the technical causes that quite obviously led to the outage.
Locks due to an incorrectly sized cache during a bulk load, which was surely part of that preparatory work.
It may in fact have been v$result_cache_memory or dbms_result_cache.memory_report, since the bug against it is not closed. However, the bug test cases are written so cleverly that they effectively state outright that v$result_cache_objects has a defect.
So, afterwards:
the result_cache_max_size parameter was changed;
most likely the /*+ result_cache */ hint was removed, or black lists were created, or /*+ no_result_cache */ was added.
As you can see, the actions taken were almost identical to our Recommendation engine case.
So how should the update have been rolled out painlessly?
What should really have been done:
Estimate how much the total cache size would change. The calculation formula will be given later.
Reduce the impact of the data load on the result cache: once after the load, rather than per statement.
Check the fixes announced by Oracle before rolling out the changes.
As we noted, the main problem of server-side caches is their consumption of expensive server memory. To address it, there is the client-side result cache.
It works like this. There is a database and a driver. On a connection attempt, the configuration is fetched from the database and the cache is brought up.
The remaining threads immediately query the driver's shared cache, saving server memory and resources.
From time to time, depending on load, the driver sends cache usage statistics to the database, which you can inspect later.
The invalidation rules of the client cache are the same as for the server one, but it is much more interesting to see how this plays out over time.
There are two invalidation cases. The first: queries come frequently and the invalidation lag has not yet elapsed.
In that case, the thread goes to the database, refreshes the caches, and reads the data from them.
If there were no queries between the arrival of the invalidation message and the invalidation lag, the driver itself requests the list of invalidated result sets once the lag elapses.
This way the cache is self-maintaining.
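The two invalidation paths can be sketched as a toy model (the class, timings, and method names are invented; they only mirror the decision logic described above):

```python
class ClientCache:
    """Toy model of the two client-cache invalidation paths."""
    def __init__(self, invalidation_lag):
        self.lag = invalidation_lag   # CLIENT_RESULT_CACHE_LAG analogue, seconds
        self.last_round_trip = 0.0    # time of the last message exchanged with the DB
        self.invalid = set()          # known-invalid result sets

    def on_db_round_trip(self, now, invalid_list):
        # Case 1: any regular round trip piggybacks the invalid result-set list.
        self.invalid |= set(invalid_list)
        self.last_round_trip = now

    def maybe_poll(self, now, fetch_invalid_list):
        # Case 2: no traffic for longer than the lag, so the driver itself
        # asks the server for the invalid result-set list.
        if now - self.last_round_trip > self.lag:
            self.on_db_round_trip(now, fetch_invalid_list())
            return True
        return False

cache = ClientCache(invalidation_lag=3.0)       # default lag is 3000 ms
cache.on_db_round_trip(now=0.0, invalid_list=[])
print(cache.maybe_poll(1.0, lambda: ["q1"]))    # False: still within the lag window
print(cache.maybe_poll(5.0, lambda: ["q1"]))    # True: the driver polls on its own
print("q1" in cache.invalid)                    # True
```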
Now let's see how to configure the database so that the client-side result cache works.
It is all quite simple.
There are the two parameters we already mentioned.
Let's look at code examples using the client cache.
Here is a .NET example.
As you can see, there is nothing in the code that turns the client cache on. Having activated it once on the server, on the client we just use the already familiar result_cache hint.
Java.
After the Java application has run, you can see how it used the client result cache.
This is a table; the records are deleted when the session disconnects.
The query shown is for the current session, but in general you should search by the SID from session_connect_info. Why Oracle did not put it right into this table (and it is a table, not a view) I could not understand.
This is exactly why I believe this feature is not in much demand, although in my view it is badly needed.
The advantages, as always, follow from the architecture:
Cheap memory; any drivers; minimal application code changes; a big reduction in database load.
No need for additional caching software.
The cons are clear:
Read consistency with a delay.
A thick client is required, the solution is vendor specific, there is a 2 GB per-client limit, and there are suspiciously few bugs on support (I found about five), which suggests little production use. Otherwise nobody would be using Oracle's caching server, Oracle Coherence.
Based on all these cases, we can finally formulate bad and good scenarios for every kind of cache.
The first bad case: the cache must become stale instantly after a data change. For hand-made caches it is hard to build correct invalidation when the underlying objects change.
Another: when database stored logic is banned by development policies.
If the cache gets filled with many distinct values, they will not be reused. For example, a cache keyed by transaction identifiers is useless, because individual transactions are rarely looked up repeatedly.
All identical queries hang for the duration of the timeout, waiting for the main query to complete.
Concurrent multi-user access provokes latch contention.
Again, the first bad case: the cache must become stale instantly after a data change.
When database stored logic is banned by development policies.
There is a development team of average qualification.
A lot of SQL is already used without an external caching layer.
There are DBMS server resource constraints.
Estimate the memory size correctly, based on the number of queries rather than the number of results.
Do not be afraid of auto-expiring; it saves space by removing unused entries.
Do not hammer the cache with queries while loading large volumes of data.
Warm the cache up.
Make sure _result_cache_timeout matches your expectations.
NEVER use FORCE for the whole database.
Check that FORCE is used appropriately for tables.
Check the find count and decide whether you need the result cache for adaptive statistics.
I would like to say thank you for your time and questions once again. Good luck with your projects.