This document summarizes a presentation on tuning Oracle GoldenGate for optimal performance in real-world environments. It discusses architectural changes in GoldenGate 12c including a microservices architecture and parallel replication. It also outlines several areas and tools for tuning performance at the host, database, and GoldenGate configuration levels including the use of AWR, STATS commands, and health check scripts.
Oracle GoldenGate 18c - REST API Examples, by Bobby Curtis
This document provides examples of using RESTful APIs with Oracle GoldenGate 18c. It includes examples of creating, deleting, and listing extracts, replicats, credentials, and distribution paths using cURL commands. It also provides examples of using RESTful APIs within shell scripts to automate administration tasks like adding extracts and replicats.
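A call of the kind such a deck demonstrates might look like the following sketch. The host, port, and credentials are placeholders (assumptions, not values from the slides), and the script only prints the cURL command so it runs without a live GoldenGate deployment.

```shell
# Hypothetical GoldenGate Microservices REST call; the host, port, and
# credentials below are placeholders, not values from the slides.
OGG_URL="https://ogg-host:9011"     # Administration Service (assumed port)
OGG_AUTH="oggadmin:********"        # placeholder credentials

# Print the cURL command that would list all extracts, rather than
# sending it, so the sketch runs without a live deployment.
list_extracts() {
  echo "curl -s -k -u ${OGG_AUTH} ${OGG_URL}/services/v2/extracts"
}

list_extracts
```

The same pattern extends to POST and DELETE requests for creating and removing extracts, replicats, credentials, and distribution paths.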
This document discusses installing and running Oracle GoldenGate 19c on Docker. It provides an overview of Oracle GoldenGate, describes how to build an Oracle GoldenGate image on Docker, and outlines two options for launching an Oracle GoldenGate container - using a provided script or a manual build. Key steps include downloading the Oracle GoldenGate installation files, building the Docker image using the files, and running the GoldenGate container to enable data replication functionality on Docker.
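The build-and-run flow described above might be sketched as follows. The image tag, container name, port, and installer file name are placeholders, and the commands are printed rather than executed so the sketch runs without Docker installed.

```shell
# Illustrative docker build/run commands for a GoldenGate image; the tag,
# container name, and installer zip name are placeholders.
build_image() {
  echo "docker build --tag oracle/goldengate:19c --build-arg INSTALLER=goldengate_installer.zip ."
}

run_container() {
  echo "docker run -d --name ogg19c -p 443:443 oracle/goldengate:19c"
}

build_image
run_container
```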
This document provides an overview of Oracle GoldenGate 12c, a heterogeneous replication tool. It describes GoldenGate's key features like real-time data integration and query offloading. The document outlines GoldenGate's topologies, architecture, supported databases, and data types. It compares GoldenGate to Oracle Streams and details new features in 12c like optimized capture methods and improved high availability. Basic concepts are explained, such as classic and integrated capture, downstream and bi-directional replication. Restrictions on data types and database features are also noted.
Oracle REST Data Services: Options for your Web Services, by Jeff Smith
ORDS has many options when it comes to delivering web services for your Oracle Database. We have an Automatic feature for your database objects where we handle everything for you. Or, you can write your own services with your SQL & PL/SQL. This slide deck shows exactly what you have to choose from for your applications.
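For the Automatic (AutoREST) option mentioned above, enabling a schema and a table might look like the following sketch; the `HR`/`EMPLOYEES` names are placeholders, not objects from the slides. The PL/SQL is printed via a heredoc so the sketch runs without a database.

```shell
# Print an illustrative ORDS AutoREST enablement script; the schema and
# table names are placeholders.
ords_autorest_sql() {
  cat <<'EOF'
BEGIN
  ORDS.ENABLE_SCHEMA(p_enabled => TRUE, p_schema => 'HR');
  ORDS.ENABLE_OBJECT(p_enabled     => TRUE,
                     p_schema      => 'HR',
                     p_object      => 'EMPLOYEES',
                     p_object_type => 'TABLE');
  COMMIT;
END;
/
EOF
}

ords_autorest_sql
```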
The document discusses tuning Oracle GoldenGate performance, including available tools for monitoring replication lag and throughput. It presents a case study examining lag times of over 1 hour 30 minutes for a replication configuration and uses tools like the Streams Performance Advisor and lag reports to identify potential bottlenecks. Recommendations are provided for configuration changes and monitoring to improve replication performance.
Hit Refresh with Oracle GoldenGate Microservices, by Bobby Curtis
The document discusses Oracle GoldenGate Microservices and its objectives for version 12.3. It aims to improve usability, manageability, and performance. The key changes include a microservices architecture with REST APIs and services for flexible administration, a security framework with role-based access, and cross-platform configuration between classic and microservices architectures. The new interfaces include HTML5 pages, a thin AdminClient, and dynamic REST endpoints for improved usability.
Moving Gigantic Files Into and Out of the Alfresco Repository, by Jeff Potts
This talk is a technical case study showing how Metaversant solved a problem for one of their clients, Noble Research Institute. Researchers at Noble deal with very large files, which are often difficult to move into and out of the Alfresco repository.
All of the Performance Tuning Features in Oracle SQL Developer, by Jeff Smith
An overview of all of the performance tuning instrumentation, tools, and features in Oracle SQL Developer. Get help making those applications and their queries more performant.
Exploring Oracle Database Performance Tuning Best Practices for DBAs and Developers, by Aaron Shilo
The document provides an overview of Oracle database performance tuning best practices for DBAs and developers. It discusses the connection between SQL tuning and instance tuning, and why tuning both the database and the SQL statements is important. It also covers the relationship between the database and the operating system, and why features like data integrity and zero-downtime updates matter. The presentation agenda includes topics like identifying bottlenecks, benchmarking, optimization techniques, the cost-based optimizer, indexes, and more.
TFA Collector - what can one do with it, by Sandesh Rao
The document provides an overview of the Oracle Trace File Analyzer (TFA) features and capabilities. TFA is installed as part of Oracle Grid Infrastructure and Oracle Database installations and provides a single interface to collect diagnostic data across clusters and consolidate it in one place. It reduces the time required to obtain diagnostic data needed to diagnose problems, saving businesses money. TFA can automatically detect events, collect relevant diagnostics, notify administrators, and upload collections to Oracle Support.
Performance Stability, Tips and Tricks and Underscores, by Jitendra Singh
This document provides an overview of upgrading to Oracle Database 19c and ensuring performance stability after the upgrade. It discusses gathering statistics before the upgrade to speed up the process, using AutoUpgrade for upgrades, and various testing tools like AWR Diff Reports and SQL Performance Analyzer to check for performance regressions after the upgrade. Maintaining good statistics and thoroughly testing upgrades are emphasized as best practices for a successful upgrade.
The document describes how to convert a single instance Oracle database to Oracle Real Application Clusters (RAC) using RMAN. The key steps include:
1. Duplicating the single instance database to an auxiliary instance on the RAC nodes using RMAN DUPLICATE.
2. Configuring the RAC-specific initialization parameters and creating the necessary redo logs and undo tablespaces.
3. Starting instances on each RAC node and registering them with the Cluster Ready Services framework.
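Step 1 above might be sketched as follows; the connection strings and database names are placeholders, and the RMAN script is printed via a heredoc rather than executed, so the sketch runs without an Oracle installation.

```shell
# Print an illustrative RMAN DUPLICATE invocation for the single-instance
# to RAC conversion; srcdb, racnode1, and racdb are placeholder names.
rman_duplicate_script() {
  cat <<'EOF'
rman target sys@srcdb auxiliary sys@racnode1 <<RMAN
DUPLICATE TARGET DATABASE TO racdb
  FROM ACTIVE DATABASE
  SPFILE
  NOFILENAMECHECK;
RMAN
EOF
}

rman_duplicate_script
```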
Any DBA from beginner to advanced level, who wants to fill in some gaps in his/her knowledge about Performance Tuning on an Oracle Database, will benefit from this workshop.
Oracle Database performance tuning using oratop, by Sandesh Rao
Oratop is a text-based user interface tool for monitoring basic database operations in real-time. This presentation will go into depth on how to use the tool and some example scenarios. It can be used for both RAC and single-instance databases and in combination with top to get a more holistic view of system performance and identify any bottlenecks.
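A typical invocation might look like the sketch below; the connection and interval are placeholders, and the command is printed rather than executed so the sketch runs without an Oracle environment.

```shell
# Print an illustrative oratop invocation: connect as sysdba and refresh
# every 10 seconds. The connection string is a placeholder.
oratop_cmd() {
  echo "oratop -i 10 / as sysdba"
}

oratop_cmd
```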
This document discusses execution plans in Oracle Database. It begins by explaining what an execution plan is and how it shows the steps needed to execute a SQL statement. It then covers how to generate an execution plan using EXPLAIN PLAN or querying V$SQL_PLAN. The document discusses what the optimizer considers a "good" plan in terms of cost and performance. It also explores key elements of an execution plan like cardinality, access paths, join methods, and join order.
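Generating a plan the way the document describes might look like the following sketch; the table and predicate are placeholders, and the SQL is printed via a heredoc so the sketch runs without a database.

```shell
# Print an illustrative EXPLAIN PLAN session; the table and predicate
# are placeholders, not objects from the document.
explain_plan_sql() {
  cat <<'EOF'
EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
EOF
}

explain_plan_sql
```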
This document summarizes a presentation on Oracle RAC (Real Application Clusters) internals with a focus on Cache Fusion. The presentation covers:
1. An overview of Cache Fusion and how it allows data to be shared across instances to enable scalability.
2. Dynamic re-mastering which adjusts where data is mastered based on access patterns to reduce messaging.
3. Techniques for handling contention including partitioning, connection pools, and separating redo logs.
4. Benefits of combining Oracle Multitenant and RAC such as aligning PDBs to instances.
5. How Oracle In-Memory Column Store fully integrates with RAC including fault tolerance features.
Tanel Poder Oracle Scripts and Tools (2010), by Tanel Poder
Tanel Poder's Oracle Performance and Troubleshooting Scripts & Tools presentation, originally delivered at the Hotsos Symposium Training Day in 2010.
Enable GoldenGate Monitoring with OEM 12c/JAgent, by Bobby Curtis
Oracle GoldenGate monitoring can be performed using Oracle Enterprise Manager 12c. The Oracle GoldenGate plug-in for Oracle Enterprise Manager 12c provides monitoring capabilities such as viewing process status and lag times, accessing log files, starting and stopping processes, and viewing statistics. In addition to the plug-in, credentials must be configured and the GoldenGate processes must be running in order to monitor GoldenGate using Oracle Enterprise Manager 12c.
Troubleshooting Complex Performance issues - Oracle SEG$ contention, by Tanel Poder
From Tanel Poder's Troubleshooting Complex Performance Issues series - an example of Oracle SEG$ internal segment contention due to some direct path insert activity.
Cost-based query optimization in Apache Hive, by Julian Hyde
Tez is making Hive faster, and now cost-based optimization (CBO) is making it smarter. A new initiative in Hive 0.13 introduces cost-based optimization for the first time, based on the Optiq framework.
Optiq's lead developer Julian Hyde shows the improvements that CBO is bringing to Hive 0.13. For those interested in Hive internals, he gives an overview of the Optiq framework and shows some of the improvements that are coming in future versions of Hive.
Automating Your Clone in E-Business Suite R12.2, by Michael Brown
It is possible to automate the cloning process in Oracle E-Business Suite 12.2. This presentation discusses how to accomplish that and gives some warnings about when it is not possible to run a clone.
For OAUG members, the slides and a recording of the presentation are available on www.oaug.org.
Oracle Data Integrator (ODI) is an ETL tool acquired by Oracle in 2006. It provides a graphical interface to build, manage, and maintain data integration processes. ODI can extract, transform, and load data between heterogeneous data sources to support business intelligence, data warehousing, data migrations, and master data management projects. It uses a 4-tier architecture with repositories to store metadata and designs, an ODI Studio for development, runtime agents to execute tasks, and a console for monitoring.
Presented at the Dallas Oracle Users Group
By Nabil Nawaz
sponsored by BIAS Corporation
Oracle Data Pump is an excellent tool for cloning databases and schemas, and it is widely used among DBAs and developers to transfer data and structure between databases. Come and learn about the new Data Pump features in Oracle version 12.2. We will also share a case study of a large multi-terabyte database in which a Data Pump import process that originally ran for more than a day was tuned to run in about 4-6 hours, a nearly 90% performance improvement. The tips shared will help ensure you have a well-tuned Data Pump import process.
Maximum Availability Architecture - Best Practices for Oracle Database 19c, by Glen Hawkins
This well-established technical deep-dive session provides the latest updates on high availability (HA) best practices. Learn how to optimize all aspects of Oracle Active Data Guard 19c. See how to use session draining, transparent application continuity, Oracle RAC, and Oracle GoldenGate to mask outages and planned maintenance from users and to accelerate time to repair for a single database or your fleet of databases. Hear about the latest HA best practices with Oracle Multitenant and understand how the new sharded architecture can achieve even higher levels of HA and fault isolation for OLTP applications. Find out how everything you know about Oracle Maximum Availability Architecture (MAA) on-premises can be deployed in the cloud.
Extreme replication at IOUG Collaborate 15, by Bobby Curtis
This document summarizes a session on tuning Oracle GoldenGate performance between an Oracle source and target database. It discusses tools for monitoring GoldenGate performance such as lag reports, process statistics, and database views. It also provides a case study example configuration and recommendations for tuning integrated extract and replicat parameters such as parallelism settings.
This document provides an overview and agenda for a presentation on tuning Oracle GoldenGate performance. It discusses measuring baseline GoldenGate performance metrics like lag times and checkpoints. It also covers tuning GoldenGate configurations like using multiple process groups. The document recommends tuning the operating system by monitoring CPU, memory, and disk I/O performance and addressing any bottlenecks found. The goal of these tuning efforts is to reduce lag times and optimize GoldenGate throughput.
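Gathering the baseline lag and checkpoint figures mentioned above is typically done from GGSCI; the group names below are placeholders, and the commands are printed rather than piped into a live GGSCI session so the sketch runs anywhere.

```shell
# Print illustrative GGSCI commands for baselining lag and checkpoints;
# EXT1/REP1 are placeholder group names.
ggsci_baseline() {
  cat <<'EOF'
INFO ALL
LAG EXTRACT EXT1
LAG REPLICAT REP1
STATS REPLICAT REP1, TOTALSONLY *
INFO EXTRACT EXT1, SHOWCH
EOF
}

ggsci_baseline
```

Operating-system counters (CPU, memory, disk I/O) would be captured alongside these with standard tools such as `vmstat` or `iostat`.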
Oracle GoldenGate provides real-time data integration and replication capabilities. It uses non-intrusive change data capture to replicate transactional changes in real-time across heterogeneous database environments with sub-second latency. GoldenGate has over 500 customers across various industries and supports workloads involving terabytes of data movement per day. It extends Oracle's data integration and high availability capabilities beyond Oracle databases to other platforms like SQL Server and MySQL.
This document provides an overview of Oracle GoldenGate and discusses performance tuning considerations. It begins with an introduction to GoldenGate's architecture and use cases. It then discusses the importance of baselining a GoldenGate implementation to understand existing performance. The document outlines how to gather baseline metrics on GoldenGate lag times, checkpoint information, and operating system CPU, memory, and disk I/O. It also provides GoldenGate tuning recommendations, such as using multiple process groups and parallel replication groups. The goal of performance tuning is to reduce lag times and optimize resource utilization.
This document discusses tuning Oracle GoldenGate for optimal performance. It begins with an overview of GoldenGate architecture and use cases, then discusses the importance of baseline monitoring. Key metrics to monitor are identified as lag times, checkpoint information, CPU usage, memory usage, and disk I/O. The document provides examples of commands to gather baseline data on these metrics. It then discusses configuring GoldenGate for parallel processing using multiple process groups to optimize performance. Overall it provides guidance on setting baselines and configuring GoldenGate to minimize lag times and resource utilization.
AMIS organized the seminar "Oracle database 12c revealed" on Monday evening, 15 July. The evening gave AMIS Oracle professionals their first opportunity to see the innovations in Oracle Database 12c in action! The AMIS specialists, who had carried out more than a year of beta testing, showed what is new and how it will be put to use in the coming years.
This presentation was given as a plenary session that evening.
This document provides an overview of a presentation on tuning Oracle GoldenGate for performance. It includes details on the speaker and their company. The presentation covers GoldenGate concepts, tools for monitoring performance, a case study, and recommendations. Performance issues were identified in the case study including replication lag of over 1 hour. The recommendations provide tuning tips for Integrated Extract and Replicat parameters.
An AMIS Overview of Oracle database 12c (12.1), by Marco Gralike
Presentation used by Lucas Jellema and Marco Gralike during the AMIS Oracle Database 12c Launch event on Monday the 15th of July 2013 (much thanks to Tom Kyte, Oracle, for being allowed to use some of his material)
Data Analytics Service Company and Its Ruby Usage, by Satoshi Tagomori
This document summarizes Satoshi Tagomori's presentation on Treasure Data, a data analytics service company. It discusses Treasure Data's use of Ruby for various components of its platform including its logging (Fluentd), ETL (Embulk), scheduling (PerfectSched), and storage (PlazmaDB) technologies. The document also provides an overview of Treasure Data's architecture including how it collects, stores, processes, and visualizes customer data using open source tools integrated with services like Hadoop and Presto.
The document provides step-by-step instructions for configuring GoldenGate replication between an Oracle source and target database. It outlines setting up the GoldenGate software, configuring the source schema and database, performing an initial data load from source to target, and configuring the GoldenGate processes including the Extract, Replicat, and Manager on both the source and target systems.
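The Extract and Replicat configuration implied by the steps above might be sketched as follows; the group, schema, credential-alias, and trail names are placeholders. The parameter files are printed via heredocs so the sketch runs without a GoldenGate installation.

```shell
# Print illustrative GoldenGate parameter files; all names are placeholders.
extract_params() {
  cat <<'EOF'
EXTRACT EXT1
USERIDALIAS ogg_src
EXTTRAIL ./dirdat/lt
TABLE src_schema.*;
EOF
}

replicat_params() {
  cat <<'EOF'
REPLICAT REP1
USERIDALIAS ogg_tgt
MAP src_schema.*, TARGET tgt_schema.*;
EOF
}

extract_params
replicat_params
```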
SF Big Analytics 20190612: Building highly efficient data lakes using Apache Hudi (Incubating), by Chester Chen
Building highly efficient data lakes using Apache Hudi (Incubating)
Even with the exponential growth in data volumes, ingesting, storing, and managing big data remains unstandardized and inefficient. Data lakes are a common architectural pattern for organizing big data and democratizing access across the organization. In this talk, we will discuss different aspects of building honest data lake architectures, pinpointing technical challenges and areas of inefficiency. We will then re-architect the data lake using Apache Hudi (Incubating), which provides streaming primitives right on top of big data. We will show how the upserts and incremental change streams provided by Hudi help optimize data ingestion and ETL processing. Further, Apache Hudi manages growth and sizes files of the resulting data lake using purely open-source file formats, also providing optimized query performance and file-system listing. We will also provide hands-on tools and guides for trying this out on your own data lake.
Speaker: Vinoth Chandar (Uber)
Vinoth is a technical lead on the Uber Data Infrastructure team.
Strata Singapore 2017 business use case section
"Big Telco Real-Time Network Analytics"
https://conferences.oreilly.com/strata/strata-sg/public/schedule/detail/62797
Big data challenges are common: we are all doing aggregations, machine learning, anomaly detection, OLAP, and more.
This presentation describes how InnerActive answers those requirements.
Synapse 2018 Guarding against failure in a hundred step pipeline, by Calvin French-Owen
Control store (ctlstore) is a new infrastructure component that solves the "n+1 problem" of independent database failures bringing down a distributed system. It replicates control data from a system of record to local SQLite databases on each service node. Ctlstore loads data transactionally from the system of record using a loader, executive, and control data log. It then replicates changes to local databases using a reflector. Snapshots to object storage allow new instances to start up with the latest data. This approach provides high availability and fast querying of shared control data across many services.
Spark Streaming allows processing live data streams using small batch sizes to provide low latency results. It provides a simple API to implement complex stream processing algorithms across hundreds of nodes. Spark SQL allows querying structured data using SQL or the Hive query language and integrates with Spark's batch and interactive processing. MLlib provides machine learning algorithms and pipelines to easily apply ML to large datasets. GraphX extends Spark with an API for graph-parallel computation on property graphs.
Replicate from Oracle to data warehouses and analyticsContinuent
Â
Analyzing transactional data residing in Oracle databases is becoming increasingly common, especially as the data sizes and complexity increase and transactional stores are no longer to keep pace with the ever-increasing storage. Although there are many techniques available for loading Oracle data, getting up-to-date data into your data warehouse store is a more difficult problem. VMware Continuent provides provides data replication from Oracle to data warehouses and analytics engines, to derive insight from big data for better business decisions. Learn practical tips on how to get your data warehouse loading projects off the ground quickly and efficiently when replicating from Oracle into Hadoop, Amazon Redshift, and HP Vertica.
Golden Gate - How to start such a project?Trivadis
Â
This document provides information about starting a GoldenGate replication project. It discusses establishing a project plan, running a proof of concept, designing a topology, defining rules and processes, and preparing documentation and scripts. It emphasizes keeping the setup simple, configuring databases correctly to avoid unnecessary overhead, implementing critical components like patching, a repository to generate scripts, heartbeat monitoring, and multiple types of monitoring. It also stresses being prepared to verify replicated data between source and destination.
Overview of data analytics service: Treasure Data ServiceSATOSHI TAGOMORI
Â
Treasure Data provides a data analytics service with the following key components:
- Data is collected from various sources using Fluentd and loaded into PlazmaDB.
- PlazmaDB is the distributed time-series database that stores metadata and data.
- Jobs like queries, imports, and optimizations are executed on Hadoop and Presto clusters using queues, workers, and a scheduler.
- The console and APIs allow users to access the service and submit jobs for processing and analyzing their data.
Oracle GoldenGate Training by Vipin Mishra
2. Training Overview
Topics
Overview of GoldenGate
GoldenGate replication & components
GoldenGate Director
GoldenGate Veridata
GoldenGate vs Oracle Streams
GoldenGate vs Data Guard
GoldenGate trail files
How Oracle GoldenGate works
GoldenGate checkpointing
GoldenGate download and installation
Prerequisites for installation
Database privileges for the GoldenGate user
Supplemental logging
Trandata
GoldenGate initial data load
GoldenGate DML replication
DDL replication - overview
GoldenGate parameter files
Logdump utility
CCM and ICM (classic and integrated capture modes)
GoldenGate upgrade
GoldenGate troubleshooting
GoldenGate performance commands
BATCHSQL
GoldenGate functions
GoldenGate abends, bugs, and fixes
GoldenGate failover documentation and DR plan
GoldenGate reporting through heartbeat
Custom data manipulation requirements for the database Replicat process
Additional topics for discussion
3. Overview of GoldenGate
• Oracle GoldenGate is a heterogeneous replication solution.
• Oracle acquired GoldenGate Software Inc. (founded in 1995 by Eric Fish and Todd Davidson), a leading provider of real-time data integration solutions, on September 3, 2009.
• GoldenGate supports:
– Zero-downtime upgrade and migration
– System integration / data synchronization
– Query and report offloading
– Real-time data distribution
– Real-time data warehousing
– Live standby database
– Active-active high availability
• Controversial replacement for Oracle Streams
• Oracle GoldenGate is Oracle's strategic solution for real-time data integration.
• GoldenGate software enables mission-critical systems to have continuous availability and access to real-time data.
• It offers a fast and robust solution for replicating transactional data between operational and analytical systems.
• Today there are more than 500 customers around the world using GoldenGate technology in over 4,000 solutions.
5. GoldenGate Supported Databases
Oracle GoldenGate for non-Oracle databases
Supported non-Oracle databases include:
IBM DB2 on Windows, UNIX, and Linux
Microsoft SQL Server 2000, 2005, and 2008
Sybase on Windows, UNIX, and Linux
Teradata on Windows, UNIX, and Linux
MySQL on Windows, UNIX, and Linux
TimesTen on Windows and Linux (delivery only)
8. GoldenGate Replication & Components
GoldenGate replication consists of three basic utilities: "extract", "replicat", and "data pump". In addition, all three of these utilities access "trail files", which are flat files that contain formatted GoldenGate data.
An extract group mines a database's redo logs and writes information about the changes it finds into trail files. Extract is functionally similar to the Streams "capture" process.
A replicat group reads information from trail files and then writes those changes into database tables. Replicat is functionally similar to the Streams "apply" process.
A data pump group copies information between trail files. The data pump is functionally similar to the Streams "propagate" process.
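The three process groups above are created from the GGSCI command line. A minimal one-way sketch (the process names esrc/psrc/rtgt and the trail prefixes are hypothetical, not from the slides):

```
-- On the source system
GGSCI> ADD EXTRACT esrc, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT esrc

GGSCI> ADD EXTRACT psrc, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT psrc

-- On the target system
GGSCI> ADD REPLICAT rtgt, EXTTRAIL ./dirdat/rt
```

Each group also needs a matching parameter file (EDIT PARAMS <group>) before it is started.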
14. GoldenGate Bi-Directional Replication
• Also known as active-active replication
[Diagram: Server A and Server B each run Capture, a Data Pump, and Replicat. On each server, Capture reads the local online redo logs and writes to a local trail; the data pump sends the local trail to a remote trail on the other server, where Replicat applies the changes.]
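In an active-active configuration, each Extract must ignore the changes that the local Replicat applies, or every change would ping-pong between the servers. One common approach is to apply with a dedicated database user and exclude that user's transactions in the Extract parameter file (a sketch; the user gguser and other names are illustrative):

```
EXTRACT esrca
USERID gguser, PASSWORD gguser
-- Skip transactions applied by the local Replicat user
TRANLOGOPTIONS EXCLUDEUSER gguser
EXTTRAIL ./dirdat/la
TABLE sa.*;
```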
15. GoldenGate Supported Data Types
• The following data types are supported for both classic and integrated capture:
– NUMBER
– BINARY_FLOAT
– BINARY_DOUBLE
– CHAR
– VARCHAR2
– LONG
– NCHAR
– NVARCHAR2
– RAW
– LONG RAW
– DATE
– TIMESTAMP
16. GoldenGate Supported Data Types
• There is limited support in classic capture for the following data types:
– INTERVAL DAY
– INTERVAL YEAR
– TIMESTAMP WITH TIME ZONE
– TIMESTAMP WITH LOCAL TIME ZONE
• The following data types are not supported:
– Abstract data types with scalars, LOBs, VARRAYs, nested tables, REFs
– ANYDATA
– ANYDATASET
– ANYTYPE
– BFILE
– MLSLABEL
– ORDDICOM
– TIMEZONE_ABBR
– URITYPE
– UROWID
17. GoldenGate Restrictions
• Neither capture method supports:
– Database Replay
– External tables
– Materialized views with ROWID
• Classic capture does not support:
– IOT mapping tables
– Key-compressed IOTs
– XMLType tables stored as XML object-relational
– Distributed transactions
– XA and PDML distributed transactions
– Capture from OLTP-compressed tables
– Capture from compressed tablespaces
– Exadata Hybrid Columnar Compression (EHCC)
18. GoldenGate Oracle-Reserved Schemas
The following schema names are reserved by Oracle and should not be configured for GoldenGate replication:
$AURORA EXFSYS REPADMIN
$JIS MDSYS SYS
$ORB ODM SYSMAN
$UNAUTHENTICATED ODM_MTR SYSTEM
$UTILITY OLAPSYS TRACESVR
ANONYMOUS ORDPLUGINS WKPROXY
AURORA ORDSYS WKSYS
CTXSYS OSE$HTTP$ADMIN WMSYS
DBSNMP OUTLN XDB
DMSYS PERFSTAT
DSSYS PUBLIC
19. GoldenGate RAC Support
• RAC support has some limitations in classic capture mode:
– Extract can only run against one instance
– If the instance fails:
• Manager must be stopped on the failed node
• Manager and Extract must be started on a surviving node
– Failover can be configured in Oracle Grid Infrastructure
– Additional archive log switching may be required in archive log mode
– Before shutting down the Extract process:
• Insert a dummy record into a source table
• Switch log files on all nodes
– Additional configuration is required to access the ASM instance
• Shared storage for trails can be:
– OCFS
– ACFS
– DBFS
– No mention of NFS in the documentation
20. GoldenGate Trail Files
Each record in the trail file contains the following information:
· The operation type, such as an insert, update, or delete
· The transaction indicator (TransInd): 00 beginning, 01 middle, 02 end, or 03 whole of transaction
· The before or after indicator (BeforeAfter) for update operations
· The commit timestamp
· The time that the change was written to the GoldenGate file
· The type of database operation
· The length of the record
· The relative byte address (RBA) within the GoldenGate file
· The schema and table name
22. How Oracle GoldenGate Works
Modular, de-coupled architecture
[Diagram: source database(s) → Capture → Trail → Pump → TCP/IP over LAN/WAN/Internet → Trail → Delivery → target database(s); the flow can also run bi-directionally.]
Capture: committed transactions are captured (and can be filtered) as they occur by reading the transaction logs.
Trail: stages and queues data for routing.
Pump: distributes data for routing to target(s).
Route: data is compressed and encrypted for routing to target(s).
Delivery: applies data with transaction integrity, transforming the data as required.
29. GoldenGate Checkpointing
• Capture, Pump, and Delivery save their positions to a checkpoint file so they can recover in case of failure.
[Diagram: source database → Capture → commit-ordered source trail → Pump → commit-ordered target trail → Delivery → target database. Each trail holds interleaved transaction records (Begin/Insert/Update/Delete/Commit for TX 1–4). Capture's checkpoint records its current read position plus the start of the oldest open (uncommitted) transaction; the Pump and Delivery checkpoints each record a current read position in their input trail and a current write position in their output.]
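The checkpoint positions described above can be inspected from GGSCI; SHOWCH prints the read and write checkpoints for a group. A sketch (the group name esrc is hypothetical, and the output layout varies by release):

```
GGSCI> INFO EXTRACT esrc, SHOWCH
GGSCI> SEND EXTRACT esrc, STATUS
```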
32. Structure of the Checkpoint Table
SQL> desc ggate.ggschkpt
 Name                  Null?    Type
 --------------------- -------- --------------
 GROUP_NAME            NOT NULL VARCHAR2(8)
 GROUP_KEY             NOT NULL NUMBER(19)
 SEQNO                          NUMBER(10)
 RBA                   NOT NULL NUMBER(19)
 AUDIT_TS                       VARCHAR2(29)
 CREATE_TS             NOT NULL DATE
 LAST_UPDATE_TS        NOT NULL DATE
 CURRENT_DIR           NOT NULL VARCHAR2(255)
 LOG_CSN                        VARCHAR2(129)
 LOG_XID                        VARCHAR2(129)
 LOG_CMPLT_CSN                  VARCHAR2(129)
 LOG_CMPLT_XIDS                 VARCHAR2(2000)
 VERSION                        NUMBER(3)
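On the target, Replicat checkpoints can additionally be stored in a checkpoint table like the one described above. A sketch of creating it (the GLOBALS entry and the owner/table names follow the slide, but are site-specific):

```
GGSCI> EDIT PARAMS ./GLOBALS
-- add the line: CHECKPOINTTABLE ggate.ggschkpt

GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD CHECKPOINTTABLE ggate.ggschkpt
```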
33. GoldenGate Download and Installation
• Go to https://edelivery.oracle.com/
• Accept the terms and conditions after signing in
• Select the product pack and platform
• Product pack: Oracle Fusion Middleware
• Platform: AIX or Oracle Solaris (as per your requirement)
• Click Go
• Select the GoldenGate Management Pack for Oracle Database
• Download it to your desktop or preferred location
• Once the download is finished, ftp or WinSCP the tar/zip file to the GoldenGate home directory on your server (let's say /ogg/11.2)
• Create a .profile for this GoldenGate installation
• Untar the file in the home directory:
• tar -xvf abc.tar
• ./ggsci
• You will be taken to the GGSCI interface
• GGSCI> create subdirs
• Start mgr
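The tail of the slide, collected into a single session (the archive name abc.tar and home directory /ogg/11.2 come from the slide and are site-specific):

```
cd /ogg/11.2
tar -xvf abc.tar
./ggsci

GGSCI> CREATE SUBDIRS
GGSCI> START MGR
```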
34. Prerequisites
• /ogg/11.2 should have at least 500 GB of space
• Ask the DBA to create a GoldenGate user in the database; let's say gguser is the GoldenGate user
• gguser should have the connect, resource, and select privileges in the source, and the DBA privilege in the target, if we are doing DML replication
• gguser should have the DBA role in both source and target if we are using both DDL and DML replication
• SQL> create tablespace gg_data
  2 datafile '/u02/oradata/gg_data01.dbf' size 200m;
• SQL> create user gguser identified by gguser
  2 default tablespace gg_data
  3 temporary tablespace temp;
• User created.
35. • SQL> grant connect, resource to gguser;
• Grant succeeded.
• SQL> grant select any dictionary, select any table to gguser;
• Grant succeeded.
• SQL> grant create table to gguser;
• Grant succeeded.
• SQL> grant flashback any table to gguser;
• Grant succeeded.
• SQL> grant execute on dbms_flashback to gguser;
• Grant succeeded.
• SQL> grant execute on utl_file to gguser;
• Grant succeeded.
• Check the login:
• GGSCI> dblogin userid gguser, password gguser
38. Supplemental Logging
– Required at the database level (minimum level)
• ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
• Minimal supplemental logging
"logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes"
– Identification key logging (PK, UK, FK)
• Table level
• Database level
• Method
– SQL, e.g. ALTER TABLE <table> ADD SUPPLEMENTAL LOG DATA ...
– GoldenGate, e.g. ADD TRANDATA <table name>
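Putting the two methods together for a single table (the table name scott.emp is hypothetical):

```
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD TRANDATA scott.emp
GGSCI> INFO TRANDATA scott.emp
```

INFO TRANDATA confirms whether supplemental logging is enabled for the table.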
39. Supplemental Logging
Time  Source
t1    Create table
t2    ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
t3    Create PK index
t4    add supplemental log data (id) (temp)
t5    drop SUPPLEMENTAL LOG DATA (ALL)
t6    add supplemental log data (id)
t7    drop supplemental log data (id) (temp)

Time  Target
t1    Create table
t2    Create PK index
t3    ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS
t4    add supplemental log data (id) (temp)
t5    drop supplemental log data (id) (temp)

1. ADD TRANDATA on the source for new objects.
2. Monitor replicated objects on the target and add supplemental logging manually before any DML change.
40. Add Trandata
• For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
1. Primary key
2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but which can include nullable columns
4. If none of the preceding key types exist (even though there might be other types of keys defined on the table), Oracle GoldenGate constructs a pseudo-key of all columns that the database allows to be used in a unique key, excluding virtual columns, UDTs, function-based columns, and any columns that are explicitly excluded from the Oracle GoldenGate configuration.
• The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
• If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
– You can stop DML activity on any and all tables before users or applications perform DDL on them.
– You cannot stop DML activity before the DDL occurs, but you can guarantee that:
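When DDL replication is in use, the schema-level command looks like the following sketch (the schema name sa is borrowed from the later replication example; adjust to your own schema):

```
GGSCI> DBLOGIN USERID gguser, PASSWORD gguser
GGSCI> ADD SCHEMATRANDATA sa
GGSCI> INFO SCHEMATRANDATA sa
```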
41. Project Overview
[Diagram: DML replication from the source PMSPRD server over WAN/TCP to the PMSPRD2 target server (reporting) and to a DR server.]
48. Replicat in DR
REPLICAT R_actDR
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/10.2.0.4a")
SETENV (ORACLE_SID = "pmspr40")
--
ASSUMETARGETDEFS
APPLYNOOPUPDATES
--
DISCARDFILE ./dirrpt/R_actDR.dsc, APPEND, MEGABYTES 50
--
GETTRUNCATES
--
USERID gguser, PASSWORD gguser
--
MAP sa.*, TARGET sa.*;
--
-- Enter the following statements from GGSCI before starting this process
--
-- delete replicat R_actDR
-- add replicat R_actDR, exttrail /dirdat/pa
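For completeness, matching source-side Extract and data-pump parameter files might look like the following sketch (the process names, SID, host, and trail prefixes are hypothetical, modeled on the Replicat above):

```
-- Extract parameter file
EXTRACT e_act
SETENV (ORACLE_SID = "pmsprd")
USERID gguser, PASSWORD gguser
EXTTRAIL ./dirdat/sa
TABLE sa.*;

-- Data pump parameter file
EXTRACT p_act
USERID gguser, PASSWORD gguser
RMTHOST drserver, MGRPORT 7809
RMTTRAIL /dirdat/pa
TABLE sa.*;
```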
49. Logdump Utility
• Logdump (the Oracle GoldenGate Log File Dump utility) is a useful Oracle GoldenGate utility that enables you to search for, filter, view, and save data that is stored in a GoldenGate trail or extract file. The following slides summarize the useful Logdump commands.
50. Logdump (continued)
• 1. Find out which local (EXTTRAIL) and remote (RMTTRAIL) trail files the data pump is reading and writing.
GGSCI> info <pump>, detail
2. In Logdump, issue the following series of commands to open the remote trail and find the last record:
Logdump> open <trail file from step 1>
Logdump> ghdr on
Logdump> count
Logdump> skip (<total count> - 1)
Logdump> n (for next)
3. Write down the table name, AuditPos, and AuditRBA of the last record. (Do not be confused by the Logdump field names: AuditRBA shows the Oracle redo log sequence number, and AuditPos shows the relative byte address of the record.)
• The following example shows that the operation was captured from Oracle's redo log sequence number 1941 at RBA 1965756.
Logdump 12 >n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 12383 (x305f) IO Time : 2005/11/17 16:02:38.295.847
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1965756
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:02:38.295.847 Insert Len 12383 RBA 7670707
Name: LLI.S_EVT_ACT
After Image: Partition 4
0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20
3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51...
0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0
0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | ....................
0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
51. Logdump (continued)
• 4. In Logdump, issue the following series of commands to open the local trail file and find the record that corresponds to the one you found in step 2:
Logdump> open <exttrail file>
Logdump> ghdr on
Logdump> filter include AuditRBA <number>
Logdump> filter include filename <table_name>
Logdump> filter match all
Logdump> n
Note: The Logdump filter command to search for a record's redo RBA is "AuditRBA", even though the trail record's header calls it "AuditPos". This is an anomaly of the Logdump program.
5. Verify that the operation is from the same Oracle redo log sequence number as the one you recorded in step 3 (AuditRBA as shown in the trail header), as there is no way to filter for this in Logdump.
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 12383 (x305f) IO Time : 2005/11/17 16:01:51.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1965756 <- same as the one in step 2
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:01:51.000.000 Insert Len 12383 RBA 7297435
Name: LLI.S_EVT_ACT
After Image: Partition 4
0000 0009 0000 0005 3532 3432 3900 0100 1500 0032 | ........52429......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0002 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2031 0003 0015 0000 3230 | ......TEST 1......20
3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 0400 | 05-11-17:16:01:51...
0A00 0000 0654 4553 5420 3200 0500 0500 0000 0130 | .....TEST 2........0
0006 000A 0000 0000 0000 0000 0000 0007 0005 0000 | ....................
0001 3000 0800 0B00 0000 0778 2035 3234 3239 0009 | ..0........x 52429..
Filtering suppressed 3289 records
6. Compare the output of the record (data part) from the remote trail with that of the record from the local trail, and make sure it is the
same. This shows they are the same data record.
52. Logdump (continued)
• 7. In Logdump, issue the following commands to clear the filter and find the next record in the local trail after the one found in step 4. Logdump searches based on the GoldenGate record's RBA within the trail file (the "extrba"). Write down the GoldenGate RBA; in the following example, it would be 7309910.
Logdump> filter clear
Logdump> n
___________________________________________________________________
Hdr-Ind : E (x45) Partition : . (x04)
UndoFlag : . (x00) BeforeAfter: A (x41)
RecLength : 1005 (x03ed) IO Time : 2005/11/17 16:01:52.000.000
IOType : 5 (x05) OrigNode : 255 (xff)
TransInd : . (x03) FormatType : R (x52)
SyskeyLen : 0 (x00) Incomplete : . (x00)
AuditRBA : 1941 AuditPos : 1979280
Continued : N (x00) RecCount : 1 (x01)
2005/11/17 16:01:52.000.000 Insert Len 1005 RBA 7309910
Name: LLI.S_EVT_ACT_FNX
After Image: Partition 4
0000 000A 0000 0006 3130 3438 3537 0001 0015 0000 | ........104857......
3230 3035 2D31 312D 3137 3A31 363A 3031 3A35 3100 | 2005-11-17:16:01:51.
0200 0A00 0000 0654 4553 5420 3100 0300 1500 0032 | .......TEST 1......2
3030 352D 3131 2D31 373A 3136 3A30 313A 3531 0004 | 005-11-17:16:01:51..
000A 0000 0006 5445 5354 2032 0005 000A 0000 0000 | ......TEST 2........
0000 0000 0000 0006 0005 0000 0001 3000 0700 0D00 | ..............0.....
0000 0950 4944 3130 3438 3537 0008 0003 0000 4E00 | ...PID104857......N.
8. In GGSCI, alter the data pump to start processing at the trail sequence number (EXTSEQNO) and relative byte address (EXTRBA) noted in step 7. The trail sequence number is that of the local trail you opened in Logdump in step 4.
GGSCI> alter extract <pump>, extseqno <trail sequence no>, extrba <trail rba>
For the previous example (assuming the trail sequence number is 000003), the syntax would be:
GGSCI> alter extract pmpsbl, extseqno 3, extrba 7309910
9. Start the data pump.
53. Other Logdump commands
• Search for a timestamp in the trail file
• Logdump 6 >sfts 2013-08-04   <- searches for the timestamp "2013-08-04"
• >filter include filename FIN_AUDIT.FIN_GL_ATTR
Which will filter out all records that do not contain the specified table name and narrow down the set of records that subsequent commands will operate on, until a ">filter clear" is issued
• >pos FIRST
Which will go to the first record in the file
• >pos <rba>
Which will go to a specific RBA in the trail file
• >log to <filename>.txt
Which will write the session output to the named file
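A typical investigation session combining these commands might look like the following (table name, timestamp, and output file name are illustrative):
Logdump> log to session.txt
Logdump> pos FIRST
Logdump> filter include filename FIN_AUDIT.FIN_GL_ATTR
Logdump> sfts 2013-08-04
Logdump> n
Logdump> filter clear
Logdump> log stop
LOG TO captures everything shown on screen into session.txt, which is handy when you need to attach Logdump evidence to a service request.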
54. Filtering Records
You can do some pretty fancy stuff with Logdump filtering; a whole suite of commands is set aside for this. We can filter on just about anything that exists in the trail file, such as process name, RBA, record length, record type, even a string!
The following example shows the syntax required to filter on DELETE operations. Note that Logdump reports how many records have been excluded by the filter.
Logdump 52 >filter include iotype delete
Logdump 53 >n
2010/11/09 13:31:40.000.000 Delete Len 17 RBA 5863
Name: SRC.USERS
Before Image: Partition 4 G b
0000 000d 0000 0009 414e 4f4e 594d 4f55 53 | ........ANONYMOUS
Filtering suppressed 42 records
Logdump continues…
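Filtering on a string works the same way; for example, to find records containing a literal value (the value here is illustrative):
Logdump> filter include string "ANONYMOUS"
Logdump> n
Filters accumulate until you issue "filter clear", so a string filter can be layered on top of a table-name or IOTYPE filter to narrow the search further.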
55. Conditions for Extract abend
• The source Manager process is stopped
• The database is down
• The server is down (patch or reboot)
• An object Extract must look up is created and dropped within a few seconds
• LOB objects and long binary values in columns
• A redo log switch occurs while archive logging is not enabled in the database
56. Conditions for Pump abend
• The database is down
• The source server is rebooted or patched
• The target server is rebooted or patched
Conditions for Replicat abend
• A source table does not exist in the target database
• The target database is down for a cold backup
• The target server is down
• The source table structure does not match the target table
• The discard file size exceeds the predefined limit
• Column positions differ between source and target
• Replicat cannot find its current trail position in the trail directory
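Several of these conditions are easier to diagnose when the discard file is sized explicitly in the Replicat parameter file. A minimal sketch (process, schema, and file names are illustrative, and the size is only a starting point):
REPLICAT rfin01
USERID ggsuser, PASSWORD ggspass
DISCARDFILE ./dirrpt/rfin01.dsc, APPEND, MEGABYTES 500
ASSUMETARGETDEFS
MAP fin.*, TARGET fin.*;
With APPEND and a generous MEGABYTES limit, rejected operations accumulate in the discard file for analysis instead of abending the process as soon as the default size is reached.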
57. Check GoldenGate Replication Performance
• GGSCI> STATS <group_name> TOTALSONLY *.*
• GGSCI> LAG <group_name>
• GGSCI> SEND <pump> GETTCPSTATS
• GGSCI> SEND <group_name> SHOWCH
Ideal conditions for Replication
• Both databases should be in sync.
• ASSUMETARGETDEFS or SOURCEDEFS should be used in the Replicat parameter file.
• All data conversion should be performed on the Replicat side.
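As a sketch, a Replicat parameter file reflecting these points might look like this (all names are illustrative); ASSUMETARGETDEFS is used because source and target structures match, and any column conversion is done here with COLMAP rather than in Extract:
REPLICAT rep1
USERID ggsuser, PASSWORD ggspass
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rep1.dsc, PURGE
MAP src.orders, TARGET tgt.orders,
  COLMAP (USEDEFAULTS, load_ts = @DATENOW());
When the table structures differ, replace ASSUMETARGETDEFS with SOURCEDEFS pointing at a definitions file generated by the DEFGEN utility.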
58. Cons of Replication
• DDL (data definition language) replication is still maturing and can be an issue.
• Supplemental logging causes extra redo generation in log storage.
• Licensing cost is higher compared to other replication products.
• Configuring GoldenGate in HA (High Availability) mode requires extra VIPs, which can be challenging in a RAC environment.
• Integration of GoldenGate with monitoring tools such as OEM (Oracle Enterprise Manager) is still not available, and most DBAs monitor it with custom scripts.
• File system dependent
• Application synchronization must be designed into the application
Pros of Replication
• Storage hardware independent
• One solution for all applications/data on a server
• Failover/failback may be integrated and automated
• Read access to the second copy, supporting horizontal scaling
• Low software cost
• Easy to set up
• Easy to manage, especially when resuming and/or recovering activities
• Speed (faster than Streams)
59. Batchsql
Why BATCHSQL
In default mode, the Replicat process applies SQL to the target database one statement at a time. This often causes a performance bottleneck where Replicat cannot apply the changes quickly enough compared to the rate at which the Extract process delivers the data. Even after configuring additional Replicat processes, performance may still be a problem.
Use of BATCHSQL
GoldenGate addresses this issue through the BATCHSQL Replicat configuration parameter. As the name implies, BATCHSQL organizes similar SQL statements into batches and applies them all at once. The batches are assembled into arrays in a memory queue on the target database server, ready to be applied. Similar SQL statements are those that perform the same operation type (insert, update, or delete) against the same target table with the same column list. For example, multiple inserts into table A would be in a different batch from inserts into table B, as would updates on table A or B. Furthermore, GoldenGate honors referential constraints: dependent transactions in different batches are executed first, which dictates the order in which the batches are applied. Even with referential integrity constraints in place, BATCHSQL can increase Replicat performance significantly.
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
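In a Replicat parameter file, the line above would appear before the MAP statements; the two queue-size options are the main tuning knobs (the process and schema names are illustrative, and the values are a starting point, not a recommendation):
REPLICAT rbat01
USERID ggsuser, PASSWORD ggspass
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
MAP src.*, TARGET tgt.*;
If a batch hits an error, Replicat retries those operations in normal one-statement-at-a-time mode, so enabling BATCHSQL is generally safe: it simply falls back when batching cannot be used.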
60. Example of BATCHSQL performance
• GoldenGate Replicat performance
• GGSCI> stats r_xxxx TOTALSONLY * REPORTRATE sec
• This gives detailed statistics of operations per second since the start of the process, since the beginning of the day, and for the last hour, for all tables replicated by that process. For instance:
*** Hourly statistics since 2012-04-16 13:00:00 ***
Total inserts/second: 1397.76
Total updates/second: 1307.46
Total deletes/second: 991.50
Total discards/second: 0.00
Total operations/second: 3696.71
• So here we can see it is doing a bit more than 3,500 operations per second, divided quite evenly between inserts, updates, and deletes. Because GoldenGate is usually used for real-time replication and there are no big batch operations, the client did not use performance-related parameters. But this time I decided to play with them. After adding both BATCHSQL and INSERTAPPEND to the parameter file of the Replicat process, the results were the following (after more than 10 minutes running), and performance was still increasing:
*** Hourly statistics since 2012-04-16 13:43:17 ***
Total inserts/second: 2174.92
Total updates/second: 2540.20
Total deletes/second: 1636.16
Total discards/second: 0.00
Total operations/second: 6351.28
• We see performance increased by more than 70% (from about 3,697 to 6,351 operations per second)! I got curious whether the BATCHSQL parameter alone could make the difference, so I removed the INSERTAPPEND parameter (which only influences inserts anyway). Here are the results after more than 10 minutes:
*** Hourly statistics since 2012-04-16 14:00:00 ***
Total inserts/second: 2402.21
Total updates/second: 2185.24
Total deletes/second: 1742.43
Total discards/second: 0.00
Total operations/second: 6329.88
• Yes, it seems this client's system in certain situations benefits mostly from the BATCHSQL parameter. For those who don't know: "in BATCHSQL mode, Replicat organizes similar SQL statements into batches within a memory queue, and then it applies each batch in one database operation. A batch contains SQL statements that affect the same table, operation type (insert, update, or delete), and column list."