We will configure the initial load along with change sync in Oracle GoldenGate.
Here is the full article link: https://www.support.dbagenesis.com/post/configure-golden-gate-initial-load-and-change-sync
Oracle GoldenGate Bidirectional Replication - Arun Sharma
GoldenGate bidirectional replication is, in effect, two unidirectional replication paths, one in each direction. Let us set up bidirectional replication for a single table between source and target.
Full article link is here: https://www.support.dbagenesis.com/post/oracle-golden-gate-bidirectional-replication
- Oracle Database is a comprehensive, integrated database management system that provides an open approach to information management.
- The Oracle architecture includes database structures like data files, control files, and redo log files as well as memory structures like the system global area (SGA) and process global area (PGA).
- Key components of the Oracle architecture include the database buffer cache, shared pool, redo log buffer, and background processes that manage instances.
An Oracle database instance consists of background processes that control one or more databases. A schema is a set of database objects owned by a user that apply to a specific application. Tables store data in rows and columns, and indexes and constraints help maintain data integrity and improve query performance. Database administrators perform tasks like installing and upgrading databases, managing storage, security, backups and high availability.
Rapid Home Provisioning is a new feature in Oracle Grid Infrastructure 12c R2 that provides a simplified way to provision and patch Oracle software and databases. It uses a centralized management server and golden images stored on ACFS to deploy pre-packaged and patched Oracle homes to client nodes. Administrators can easily create working copies of golden images, deploy databases from the working copies, and seamlessly patch databases by moving them to a working copy based on a newer patched golden image with a single command.
The document provides an overview of Oracle architecture including:
- Data is stored in data blocks which make up extents that form segments within tablespaces. Segments represent database objects like tables and indexes.
- The system global area (SGA) resides in memory and caches data and structures for efficient processing. It includes the database buffer cache, redo log buffer, and shared pool.
- Server processes handle SQL statements by parsing, executing, and returning results. Background processes perform functions like checkpoint, recovery, and writing data to disk.
- Transactions are written to the redo log and undo segments maintain rollback information. This supports data consistency, recovery, and rolling back transactions.
The document discusses the Performance Schema in MySQL. It provides an overview of what the Performance Schema is and how it can be used to monitor events within a MySQL server. It also describes how to configure the Performance Schema by setting up actors, objects, instruments, consumers and threads to control what is monitored. Finally, it explains how to initialize the Performance Schema by truncating existing summary tables before collecting new performance data.
Oracle Database performance tuning using oratop - Sandesh Rao
Oratop is a text-based user interface tool for monitoring basic database operations in real-time. This presentation will go into depth on how to use the tool and some example scenarios. It can be used for both RAC and single-instance databases and in combination with top to get a more holistic view of system performance and identify any bottlenecks.
This document provides an overview of the Oracle database architecture. It describes the major components of Oracle's architecture, including the memory structures like the system global area and program global area, background processes, and the logical and physical storage structures. The key components are the database buffer cache, redo log buffer, shared pool, processes, tablespaces, data files, and redo log files.
A duplicate (clone or snapshot) database is useful for a variety of purposes, most of which involve testing and upgrades. You can perform the following tasks in a duplicate database:
• Test backup and recovery procedures
• Test an upgrade to a new release of Oracle Database
• Test the effect of applications on database performance
• Create a standby database (Data Guard) with DG Broker
• Leverage a transient logical standby to perform an upgrade
• Generate reports
The document summarizes a presentation on the internals of InnoDB file formats and source code structure. The presentation covers the goals of InnoDB being optimized for online transaction processing (OLTP) with performance, reliability, and scalability. It describes the InnoDB architecture, on-disk file formats including tablespaces, pages, rows, and indexes. It also discusses the source code structure.
An Oracle database consists of physical files on disk that store data and logical memory structures that manage the files. The database is made up of data files that contain tables and indexes, control files that track the physical components, and redo log files that record changes. The instance in memory associates with one database and manages access through background processes. The database is divided into logical storage units called tablespaces that map to the physical data files. Common tablespaces include SYSTEM, SYSAUX, undo and temporary tablespaces.
How to use histograms to get better performance - MariaDB plc
Sergei Petrunia and Varun Gupta, software engineers at MariaDB, show how histograms can be used to improve query performance. They begin by introducing histograms and explaining why they’re needed by the query optimizer. Next, they discuss how to determine whether or not histograms are needed, and if so, to which tables and columns they should be applied. Finally, they cover best practices and recent improvements to histograms.
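To make the idea concrete, here is a minimal sketch of an equi-height histogram of the kind the talk describes; the function names are illustrative, not MariaDB internals:

```python
# Equi-height histogram sketch: each bucket covers roughly the same
# number of rows, and the optimizer estimates selectivity from buckets.
import bisect

def build_equi_height(values, buckets):
    """Return bucket upper bounds so each bucket holds about
    len(values) / buckets of the sorted data."""
    s = sorted(values)
    step = len(s) / buckets
    return [s[min(int(step * (i + 1)) - 1, len(s) - 1)] for i in range(buckets)]

def estimate_selectivity(bounds, value):
    """Optimizer-style estimate of the fraction of rows with col <= value:
    the share of buckets whose upper bound does not exceed value."""
    return bisect.bisect_right(bounds, value) / len(bounds)

bounds = build_equi_height(list(range(100)), buckets=4)
print(bounds)                            # [24, 49, 74, 99]
print(estimate_selectivity(bounds, 49))  # 0.5 -- true fraction is 50/100
```

With only the bucket bounds stored, the optimizer can approximate range selectivities without scanning the table, which is exactly why histograms help cardinality estimation.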
Best Practices for Becoming an Exceptional Postgres DBA - EDB
Drawing from our teams who support hundreds of Postgres instances and production database systems for customers worldwide, this presentation provides real-world best practices from the nation's top DBAs. Learn top-notch monitoring and maintenance practices, get resource planning advice that can help prevent, resolve, or eliminate common issues, learn top database tuning tricks for increasing system performance, and ultimately gain greater insight into how to improve your effectiveness as a DBA.
This document discusses administering user security in an Oracle database. It covers creating and managing database user accounts, assigning privileges, creating and managing roles, and creating and managing profiles to implement password security and control resource usage. Specific topics covered include authenticating users, predefined administrative accounts, creating users, granting and revoking privileges, benefits of roles, assigning privileges to roles and roles to users, predefined roles, creating roles, profiles and password security features, and creating a password profile.
How to Use EXAchk Effectively to Manage Exadata Environments - Sandesh Rao
This document discusses using the Autonomous Health Framework (AHF) to manage Exadata environments. AHF includes EXAchk for compliance checking and fault detection on Exadata. EXAchk can be run automatically or on-demand to check for compliance issues and potential problems. It integrates with tools like Enterprise Manager, MOS, and TFA to provide centralized reporting and issue resolution. The document provides instructions for installing and configuring AHF and EXAchk for optimal use.
This presentation covers MySQL data encryption at rest. How do you encrypt all tablespaces and MySQL-related files for compliance? The new releases in MySQL 8.0 take care of encrypting the system tablespace and supporting tables, unlike MySQL 5.7.
The document discusses managing an Oracle database instance. It covers:
1. Starting and stopping the Oracle database and components like Database Control using commands like emctl and sqlplus.
2. Using tools like SQL*Plus, Enterprise Manager, and dynamic performance views to access and modify initialization parameters, view alert logs, and manage the database.
3. The stages of database startup including nomount, mount, and open and database shutdown options like normal, transactional, and immediate.
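The startup stages and shutdown modes in point 3 correspond to SQL*Plus commands along these lines (a sketch, run as SYSDBA; the exact sequence depends on the task at hand):

```sql
-- Start the instance without mounting (e.g. to re-create the control file)
STARTUP NOMOUNT;
-- Mount the database: control files are read, files are not yet open
ALTER DATABASE MOUNT;
-- Open the database for normal use
ALTER DATABASE OPEN;

-- Shutdown variants, from most to least graceful:
SHUTDOWN NORMAL;         -- waits for all sessions to disconnect
SHUTDOWN TRANSACTIONAL;  -- waits for active transactions to finish
SHUTDOWN IMMEDIATE;      -- rolls back active transactions and closes
```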
Krzysztof Ksiazek - Severalnines AB
So, you are a developer or sysadmin and showed some ability in dealing with database issues. And now, you have been elected to the role of DBA. And as you start managing the databases, you wonder…
* How do I tune them to make best use of the hardware?
* How do I optimize the Operating System?
* How do I best configure MySQL or MariaDB for a specific database workload?
If you're asking yourself these questions when it comes to optimally running your MySQL or MariaDB databases, then this talk is for you!
We will discuss some of the settings that are most often tweaked and which can bring you significant improvement in the performance of your MySQL or MariaDB database. We will also cover some of the variables which are frequently modified even though they should not be.
Performance tuning is not easy, especially if you're not an experienced DBA, but you can go a surprisingly long way with a few basic guidelines.
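As a flavor of what "most often tweaked" means in practice, here is a hedged my.cnf fragment with variables commonly adjusted in MySQL/MariaDB tuning; the values are placeholders, not recommendations, and the talk's actual list may differ:

```ini
[mysqld]
# Usually the single most impactful setting: cache of data and indexes
innodb_buffer_pool_size        = 8G
# Larger redo logs smooth out write-heavy workloads
innodb_log_file_size           = 1G
# Durability vs. throughput trade-off (1 = flush on every commit)
innodb_flush_log_at_trx_commit = 1
# Connection and in-memory temp-table limits
max_connections                = 500
tmp_table_size                 = 64M
max_heap_table_size            = 64M   # keep equal to tmp_table_size
```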
The document provides an overview of Oracle Database including its architecture, components, and functions. It discusses Oracle's three-level database architecture consisting of the external, conceptual, and internal levels. It also describes Oracle's memory structure including the shared pool, database buffer cache, and redo log buffer. Key Oracle background processes like DBWR, LGWR, PMON, SMON, and CKPT are summarized.
The document discusses configuring Oracle's network environment. It describes using tools like Enterprise Manager and tnsping to manage listeners, configure net service aliases, and test connectivity. It also covers establishing connections, naming methods, and using shared vs dedicated server processes.
- Oracle Data Guard is a data protection and disaster recovery solution that maintains up to 9 synchronized standby databases to protect enterprise data from failures, disasters, errors, and corruptions.
- Data Guard uses redo apply and SQL apply technologies to synchronize primary and standby databases by transmitting redo logs from the primary and applying the redo logs on the standby databases.
- Data Guard allows role transitions like switchovers and failovers between primary and standby databases to minimize downtime during planned and unplanned outages.
PostgreSQL Replication High Availability Methods - Mydbops
These slides illustrate the need for replication in PostgreSQL: why you need a replicated DB topology, terminologies, replication nodes, and more.
MariaDB MaxScale is a database proxy that provides scalability, high availability, and data streaming capabilities for MariaDB and MySQL databases. It acts as a load balancer and router to distribute queries across database servers. MaxScale supports services like read/write splitting, query caching, and security features like selective data masking. It can monitor replication lag and route queries accordingly. MaxScale uses a plugin architecture and its core remains stateless to provide flexibility and high performance.
Tanel Poder - Performance stories from Exadata Migrations
Tanel Poder has been involved in a number of Exadata migration projects since its introduction, mostly in the area of performance assurance, troubleshooting and capacity planning.
These slides, originally presented at UKOUG in 2010, cover some of the most interesting challenges, surprises and lessons learnt from planning and executing large Oracle database migrations to Exadata v2 platform.
This material is not just repeating the marketing material or Oracle's official whitepapers.
Understanding SQL Trace, TKPROF and Execution Plan for beginners - Carlos Sierra
The three fundamental steps of SQL Tuning are: 1) Diagnostics Collection; 2) Root Cause Analysis (RCA); and 3) Remediation. This introductory session on SQL Tuning is for novice DBAs and developers who need to investigate a piece of an application performing poorly.
In this session participants will learn about producing a SQL trace and then a summary TKPROF report. A sample TKPROF report is walked through with the audience, where the trivial and the not-so-trivial are exposed and explained. Execution plans are also navigated and explained, so participants can later untangle complex execution plans and start diagnosing badly performing SQL.
This session encourages participants to ask all kinds of questions that could be potential road-blocks to a deeper understanding of how to address poorly performing SQL.
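For readers who want to try this ahead of the session, producing a trace and a TKPROF summary typically looks like the following sketch (standard Oracle commands; file names and identifiers are placeholders):

```sql
-- Tag the trace file so it is easy to find, then enable extended SQL trace
-- (event 10046, level 12 = include bind values and wait events)
ALTER SESSION SET tracefile_identifier = 'mytrace';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the poorly performing statement here ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```

The raw trace file is then summarized from the OS shell with something like `tkprof <tracefile>.trc report.txt sys=no sort=exeela`, which is the TKPROF report the session navigates.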
Understanding my database through SQL*Plus using the free tool eDB360 - Carlos Sierra
This session introduces eDB360 - a free tool that is executed from SQL*Plus and generates a set of reports providing a 360-degree view of an Oracle database; all without installing anything on the database.
If using Oracle Enterprise Manager (OEM) is off-limits for you or your team, and you can only access the database through a SQL*Plus connection with no direct access to the database server, then this tool is a perfect fit to provide you with a broad overview of the database configuration, performance, top SQL and much more. You only need a SQL*Plus account with read access to the data dictionary, and common Oracle licenses like the Diagnostics or the Tuning Pack.
Typical uses of this eDB360 tool include: database health checks, performance assessments, pre- or post-upgrade verifications, snapshots of the environment for later use, comparisons between two similar environments, documenting the state of a database when taking ownership of it, etc.
Once you learn how to use eDB360 and get to appreciate its value, you may want to execute this tool on all your databases on a regular basis, so you can keep track of things for long periods of time. This tool is becoming part of a large collection of goodies many DBAs use today.
During this session you will learn the basics about the free eDB360 tool, plus some cool tricks. The target audience is: DBAs, developers and consultants (some managers could also benefit).
What to Expect From Oracle Database 19c - Maria Colgan
The Oracle Database has recently switched to an annual release model. Oracle Database 19c is only the second release in this new model. So what can you expect from the latest version of the Oracle Database? This presentation explains how Oracle Database 19c is really 12.2.0.3, the terminal release of the 12.2 family, and covers the new features you can find in this release.
To build a Taco LoadMatch design system, one must:
1. Build the main loop with a boiler or chiller and connect components.
2. Insert imposed loads and connect them to the main loop.
3. Enter load data for each imposed load.
4. Set the main loop fluid temperature and flow rate.
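For step 4, the loop flow rate in a water system is commonly sized from the load and the design temperature drop; a quick sketch of that arithmetic (the constant 500 is the standard US-units shorthand for water, and this is generic hydronic math, not anything Taco-specific):

```python
def loop_flow_gpm(load_btuh: float, delta_t_f: float) -> float:
    """Required flow (GPM) for a hydronic loop carrying load_btuh BTU/hr
    across a temperature drop of delta_t_f degrees F.
    500 ~= 8.33 lb/gal * 60 min/hr * 1 BTU/(lb*F) for water."""
    return load_btuh / (500.0 * delta_t_f)

# A 100,000 BTU/hr load with a 20 F design delta-T needs 10 GPM
print(loop_flow_gpm(100_000, 20))  # 10.0
```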
We can leverage Delta Lake and Structured Streaming for write-heavy use cases. This talk will go through a use case at Intuit where we built merge-on-read (MOR) as an architecture to allow for a very low SLA. With MOR there are different ways to view the fresh data, so we will also go over how we performance-tested those approaches to arrive at the best method for the given use case.
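The merge-on-read idea in miniature: writes only append to a delta log (cheap), and reads combine a compacted base with that log. This is a plain-Python illustration of the concept, not the Delta Lake API:

```python
def merge_on_read(base, deltas):
    """base: {key: row} from compacted files.
    deltas: ordered list of (key, row_or_None) upserts/deletes from the log.
    The merge cost is paid at read time, keeping writes fast."""
    view = dict(base)
    for key, row in deltas:
        if row is None:
            view.pop(key, None)   # tombstone: delete the row
        else:
            view[key] = row       # upsert: newest write wins
    return view

base = {1: "a", 2: "b"}
deltas = [(2, "b2"), (3, "c"), (1, None)]
print(merge_on_read(base, deltas))  # {2: 'b2', 3: 'c'}
```

The trade-off the talk evaluates is exactly this: how often to compact deltas into the base versus how much merge work each read can afford.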
The document discusses how operating systems use processes and scheduling to allow multiple programs to run simultaneously. It explains that a process is the running instance of a program and contains the program counter, registers, memory allocation, and other state information. The operating system uses process scheduling and a process control block (PCB) for each process to track status, allocate CPU time, and handle interrupts and blocking for I/O. It outlines common scheduling algorithms like first-come first-served, shortest job next, priority, and round robin.
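The round-robin algorithm mentioned last is easy to make concrete; a small simulation of preemption with a fixed time quantum (illustrative only, not tied to any real OS):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin CPU scheduling.
    bursts: {pid: cpu_time_needed}; returns {pid: completion_time}."""
    remaining = dict(bursts)
    ready = deque(bursts)          # FIFO ready queue, insertion order
    clock, done = 0, {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        clock += run               # process uses the CPU for one slice
        remaining[pid] -= run
        if remaining[pid] == 0:
            done[pid] = clock      # process finishes
        else:
            ready.append(pid)      # preempted: back of the queue
    return done

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
```

Note how the short process C finishes early even though it arrived last in the queue, which is the responsiveness benefit round robin trades against extra context switches.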
The document discusses how operating systems manage multiple processes running simultaneously. It explains that the processor manager is made up of a process scheduler and job scheduler. The job scheduler takes groups of processes (called jobs) and reorders them to balance CPU-intensive and I/O-intensive processes. It then passes the processes to the process scheduler. The process scheduler rapidly switches the CPU between processes, setting their status as ready, running, waiting, or finished depending on whether they are awaiting input/output or currently using the CPU.
The document provides instructions for creating input datasets and running a job flow for velocity modeling in Diamond's Velocity Workbench. It describes starting Job Flow Manager, creating a new job, adding modules to import data, run velocity analysis tools, and export outputs. Specifically, it covers adding Begin, Disk Data Input, VELSTK, and VELCOH modules to set parameters and run the job. Finally, it discusses opening Velocity Workbench, loading the resulting datasets, setting the geometry, and adjusting panel parameters to pick velocities from the coherency gathers.
Test Driven Database Development With Data DudeCory Foy
The document provides an overview and demo of Test-Driven Database Development with DataDude. It discusses how DataDude allows developers to put databases under version control, compare schemas, develop and run unit tests against database objects, and generate test data. The demo shows using DataDude to version control a database, refactor elements, unit test stored procedures, and generate test data to populate tables for testing purposes. Other database testing tools and more information resources are also listed.
This document provides instructions for setting up hot standby replication between a primary and secondary PostgreSQL database. It describes configuring WAL archiving on the primary, taking a backup of the primary to initialize the secondary, creating a recovery.conf file on the secondary, and testing replication. It also explains how to trigger a failover, switchover, and rebuild the primary database after a failover.
PgConf US 2015 - ALTER DATABASE ADD more SANITYOleksii Kliukin
Zalando SE is one of the largest fashion retailers in Europe, relying on hundreds of PostgreSQL databases to support millions of daily transactions. To manage changes to their databases, they use a combination of tools including Sqitch for incremental database changes, Versioning to track database diffs, and in-house tools for data migrations. Their process involves developing database changes as SQL diffs, reviewing and testing them, and deploying them through multiple stages without taking databases offline. This allows new database and API versions to be rolled out while keeping applications running.
SAP ABAP lsmw beginner lerning tutorial.pdfPhani Pavan
LSMW is a tool in SAP used to migrate legacy data into SAP. It has a recording tool to record transactions and fields to upload. This document outlines the 15 step process to use LSMW to upload vendor master data from a text file into SAP, including creating a project, recording the vendor creation transaction, mapping fields, creating an upload file, reading and converting the data, and running the batch input session to upload the records. Testing is done along the way to validate correct mapping and data conversion before uploading all records.
SAP ABAP lsmw beginner lerning tutorial.pdfPhani Pavan
LSMW is a tool in SAP used to migrate legacy data into SAP. It has a recording tool to record transactions and fields to upload. This document outlines the 15 step process to use LSMW to upload vendor master data from a text file into SAP, including creating a project, recording the vendor creation transaction, mapping fields, creating an upload file, reading and converting the data, and running the batch input session to upload the records. Testing is done along the way to validate correct mapping and data conversion before uploading all records.
Lsmw (Legacy System Migration Workbench)Leila Morteza
This document provides instructions for using SAP's Legacy System Migration Workbench (LSMW) tool to migrate legacy vendor master data into SAP. It outlines the 15 steps to create an LSMW project and upload vendor records, including recording transactions, mapping fields, uploading a data file, reading and converting the data, and running a batch input session to complete the migration. The instructions are accompanied by screenshots to illustrate each step in the process.
Ilya Kosmodemiansky - An ultimate guide to upgrading your PostgreSQL installa...PostgreSQL-Consulting
Even an experienced PostgreSQL DBA can not always say that upgrading between major versions of Postgres is an easy task, especially if there are some special requirements, such as downtime limitations or if something goes wrong. For less experienced DBAs anything more complex than dump/restore can be frustrating.
In this talk I will describe why we need a special procedure to upgrade between major versions, how that can be achieved and what sort of problems can occur. I will review all possible ways to upgrade your cluster from classical pg_upgrade to old-school slony or modern methods like logical replication. For all approaches, I will give a brief explanation how it works (limited by the scope of this talk of course), examples how to perform upgrade and some advice on potentially problematic steps. Besides I will touch upon such topics as integration of upgrade tools and procedures with other software — connection brokers, operating system package managers, automation tools, etc. This talk would not be complete if I do not cover cases when something goes wrong and how to deal with such cases.
This document provides instructions for transferring data from an older version of the uniCenta oPOS database to the current version 4.5 database format. The transfer process involves 3 main steps:
1. Creating a new database schema and configuring the new uniCenta oPOS version to use it.
2. Connecting the transfer tool to the source database, selecting the database to transfer from.
3. Starting the transfer process, which outputs progress information and status updates. The transferred data is then available in the new database schema.
Here Don goes over some of the benefits of using GIT as well as some of the basic concepts and methods. Later he goes through the workflow of using GIT. Download his slides here or email him at dlee@tagged.com.
1. The document discusses the basic Git workflow of moving changes from a remote repository to a local repository by making commits locally and then pushing those commits to remote.
2. It explains how the staging area is used to prepare commits by adding files before committing and how this allows for calmer review of changes before committing.
3. The process of pushing local commits to a remote repository and pulling remote commits to keep the local repository updated is covered.
This document provides an overview of Git and GitHub. It describes key Git concepts and commands like commit, push, pull, clone, fetch, merge, diff, branching, and .gitignore. It also provides step-by-step instructions for initializing a Git repository, making configurations, adding and committing files, checking out different versions, comparing changes, removing files, pushing changes to remote repositories, cloning repositories, fetching updates, creating and merging branches, and deleting branches. The goal is to explain both the theory and practical usage of version control with Git and GitHub.
Similar to Configure Golden Gate Initial Load and Change Sync (20)
This article describes important Linux commands that you must know as a system or database administrator.
Here is the full article link: https://www.support.dbagenesis.com/post/important-linux-commands
This summary is organized according to the types of processing that can be performed with the Oracle GoldenGate functions.
Here is the full article link: https://www.support.dbagenesis.com/post/oracle-golden-gate-functions
Install Oracle 12c Golden Gate On Oracle LinuxArun Sharma
In this article we will look at the steps to install oracle 12c Golden Gate on Oracle Enterprise Linux 6.5. The steps involved are:
Virtual Machine Setup
Install Oracle 12c Database
Install Oracle 12c Golden Gate
Prepare Golden Gate for Replication
Here is the full link of article: https://www.support.dbagenesis.com/post/install-oracle-12c-golden-gate-on-oracle-linux
TKPROF Stands for Transient Kernel Profiler
It allows you to analyze a trace file to determine where time is being spent
It converts SQL trace files into human readable format. To activate SQL trace for a particular session
Full article link is here: https://www.support.dbagenesis.com/post/oracle-tkprof-utility
When there are lot updates, deletes inside database, it creates lot of empty pockets of space that are not large enough to insert new data. We call this type of empty space as fragmented free space.
Database performance can be impacted by such fragmented space. The process of combining fragmented space into one big free space is known as de-fragmentation.
One of the simplest ways to do it by shrinking table, index segments to reclaim the wasted space. But before you can directly shrink table / index, you must run Oracle segment advisor to get recommendations as to how much space can you reclaim.
In the below activity I will show you how to work with Oracle segment advisor:
Full article link is here: https://www.support.dbagenesis.com/post/oracle-segment-advisor
Oracle 11g Installation With ASM and Data Guard SetupArun Sharma
In this article we will look at Oracle 11g installation with ASM storage and also setup physical standby on ASM.
We will be following below steps for our configuration:
Setup Primary Server
Setup Standby Server
Full article link is here: https://www.support.dbagenesis.com/post/oracle-11g-installation-with-asm-and-data-guard-setup
Oracle 11g to 12c Upgrade With Data Guard and ASMArun Sharma
In this article we will be performing Oracle 11g to 12c database upgrade with data guard and ASM configured.
Below are the steps we are going to follow to perform the database upgrade:
Upgrade GRID_HOME on standby
Upgrade ORACLE_HOME on standby
Upgrade GRID_HOME on primary
Upgrade ORACLE_HOME on primary
Post upgrade steps
Let us start the upgrade process.In this article we will be performing Oracle 11g to 12c database upgrade with data guard and ASM configured.
Below are the steps we are going to follow to perform the database upgrade:
Upgrade GRID_HOME on standby
Upgrade ORACLE_HOME on standby
Upgrade GRID_HOME on primary
Upgrade ORACLE_HOME on primary
Post upgrade steps
Let us start the upgrade process.
Full article link is here: https://www.support.dbagenesis.com/post/oracle-11g-to-12c-upgrade-with-data-guard-asm
This document outlines the steps for performing a rolling upgrade from Oracle 11g to 12c while minimizing downtime. It involves first converting the physical standby database to a logical standby, upgrading the logical standby, and then switching over to it as the new primary database before converting it back to a physical standby and upgrading the original primary database.
Convert Physical Standby Into Logical StandbyArun Sharma
In this article, we will be converting an existing Physical standby into a logical standby.
Note: this article applies to Oracle 12c R2 version
Assumptions: you already have a physical standby configured and data guard broker is enabled.
Enable Fast Start Failover Data Guard BrokerArun Sharma
This document provides instructions for enabling fast-start failover (FSFO) between a primary and standby database. It describes setting the StaticConnectIdentifier parameter on both databases, defining the FastStartFailoverThreshold and FastStartFailoverLagLimit parameters, enabling FSFO, simulating a failure by aborting the primary instance, and checking the logs to confirm automatic failover occurred. It also provides instructions for reinstating the original primary and disabling FSFO.
It’s very simple to perform failover using data guard broker. If primary database is unavailable, we can activate standby using below method. Note, in a failover, we have lost primary.
Full article link is here: https://www.support.dbagenesis.com/post/data-guard-broker-failover
Oracle Data Guard Physical Standby ConfigurationArun Sharma
There are various steps in which you can configure physical standby database. We need to make several changes to the primary database before we can even setup the standby database.
This article applies to Oracle 12c R2 database version
Full link of article is here: https://www.support.dbagenesis.com/post/configure-physical-standby
This document provides instructions for scheduling RMAN backups on Windows using the task scheduler. It involves creating a .cmd file with RMAN commands, a .bat file to call the .cmd file, and then scheduling the .bat file in the Windows task scheduler by defining a trigger such as daily or weekly, and specifying the .bat file location to run. Once scheduled, the task can be tested by running it manually from the task scheduler.
Oracle introduced RMAN backup compression from 10g version onward. From 11g version, Oracle introduced 4 different types of backup compression methods.
Full article link is here: https://www.support.dbagenesis.com/post/rman-backup-compression-types
DBAs, for years, are writing OS level scripts to execute different database related tasks and schedule it via cront tab in Linux. The cron jobs work perfectly well until Oracle released DBMS_SCHEDULER in 10g version.
Note: DBMS_SCHEDULER has introduced many benefits yet, many DBAs still stick to OS level scripting.
Full article link is here: https://www.support.dbagenesis.com/post/scheduling-jobs-with-dbms_scheduler
I will be sharing simple SQL queries that will help you manage Oracle database users.
Article full link is here: https://www.support.dbagenesis.com/post/oracle-user-management
This document discusses two ways to create directories within ASM disk groups in Oracle: using the ASMCMD utility or using SQLPlus connected to the ASM instance. It provides examples of using ASMCMD commands like mkdir to create directories within specific disk groups, and using SQL queries to create multiple directories at the same time within ASM.
we will be creating new ASM diskgroup using SQLPLUS command while connected to ASM instance.
You must have an unused partition / disk on the server that can be used to create ASM diskgroup.
Full article link is here: https://www.support.dbagenesis.com/post/create-diskgroup-sqlplus-command
Get answers to the real time Oracle Golden gate interview questions!
Here is the link for full article: https://www.support.dbagenesis.com/post/oracle-golden-gate-interview-questions
RMAN in data guard configuration works very normal like single standalone database. But, there are few important things you should know in order to define your backup and recovery strategy when using data guard.
Full article link is here: https://www.support.dbagenesis.com/post/rman-in-dataguard-configuration
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
WhatsApp offers simple, reliable, and private messaging and calling services for free worldwide. With end-to-end encryption, your personal messages and calls are secure, ensuring only you and the recipient can access them. Enjoy voice and video calls to stay connected with loved ones or colleagues. Express yourself using stickers, GIFs, or by sharing moments on Status. WhatsApp Business enables global customer outreach, facilitating sales growth and relationship building through showcasing products and services. Stay connected effortlessly with group chats for planning outings with friends or staying updated on family conversations.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Atelier - Innover avec l’IA Générative et les graphes de connaissances
Allez au-delà du battage médiatique autour de l’IA et découvrez des techniques pratiques pour utiliser l’IA de manière responsable à travers les données de votre organisation. Explorez comment utiliser les graphes de connaissances pour augmenter la précision, la transparence et la capacité d’explication dans les systèmes d’IA générative. Vous partirez avec une expérience pratique combinant les relations entre les données et les LLM pour apporter du contexte spécifique à votre domaine et améliorer votre raisonnement.
Amenez votre ordinateur portable et nous vous guiderons sur la mise en place de votre propre pile d’IA générative, en vous fournissant des exemples pratiques et codés pour démarrer en quelques minutes.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
A Study of Variable-Role-based Feature Enrichment in Neural Models of CodeAftab Hussain
Understanding variable roles in code has been found to be helpful by students
in learning programming -- could variable roles help deep neural models in
performing coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Why Mobile App Regression Testing is Critical for Sustained Success_ A Detail...kalichargn70th171
A dynamic process unfolds in the intricate realm of software development, dedicated to crafting and sustaining products that effortlessly address user needs. Amidst vital stages like market analysis and requirement assessments, the heart of software development lies in the meticulous creation and upkeep of source code. Code alterations are inherent, challenging code quality, particularly under stringent deadlines.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Looking for a reliable mobile app development company in Noida? Look no further than Drona Infotech. We specialize in creating customized apps for your business needs.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
2. What is initial load?
Initial load is the process of extracting data records from the source database and loading those records onto the target database. It is a data migration process that is performed only once.
4. On the target, just create the EMP table without any data in it. Generate the DDL command for the FOX.EMP table.
In the generated output, change FOX to TOM and execute it on the target database ggdev.
5. Configure Change Sync
First, we will configure change sync for the FOX.EMP table.
Connect to database via Golden Gate:
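Connecting from GGSCI looks like the following sketch; the GoldenGate user ogguser and its password are placeholders for your own credentials:

```
GGSCI> DBLOGIN USERID ogguser, PASSWORD ogguser_pwd
```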
6. Add table level supplemental logging via Golden Gate:
Create GG Extract Process:
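The two steps above can be sketched as follows; the extract group name ext1 is an assumed example:

```
-- enable table-level supplemental logging for the replicated table
GGSCI> ADD TRANDATA FOX.EMP

-- create the change-sync (online) extract, capturing from the redo log starting now
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
```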
7. Create local trail file for extract process
Create parameter file for extract process:
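A minimal sketch of both steps, assuming the extract group ext1 and a local trail with the prefix lt:

```
-- register a local trail for the extract
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1

-- edit the extract parameter file (GGSCI> EDIT PARAMS ext1) to contain:
EXTRACT ext1
USERID ogguser, PASSWORD ogguser_pwd
EXTTRAIL ./dirdat/lt
TABLE FOX.EMP;
```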
15. Add initial load Replicat on target
Edit parameter file for initial load replicat
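With the direct-load method the slides describe (the initial load extract starts the replicat as a task on the target), the target-side setup might look like this; the group name iload1 and the credentials are assumptions:

```
-- create the initial load replicat as a one-time task
GGSCI> ADD REPLICAT iload1, SPECIALRUN

-- edit its parameter file (GGSCI> EDIT PARAMS iload1) to contain:
REPLICAT iload1
USERID ogguser, PASSWORD ogguser_pwd
ASSUMETARGETDEFS
MAP FOX.EMP, TARGET TOM.EMP;
```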
16. Start Initial Load & Change Sync
First, start the change sync extract and data pump on the source. This will start capturing changes while we perform the initial load. Do not start the replicat at this point.
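Starting the change-sync processes on the source is then a matter of (ext1 and pmp1 are assumed names for the extract and data pump groups):

```
GGSCI> START EXTRACT ext1
GGSCI> START EXTRACT pmp1
GGSCI> INFO ALL
```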
17. Now start the initial load extract. Remember, this will automatically start the initial load replicat on the target.
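Assuming the initial load extract was added with SOURCEISTABLE under the name iext1, it is started like an ordinary extract but runs once and stops; the manager on the target then launches the initial load replicat task:

```
GGSCI> START EXTRACT iext1
GGSCI> VIEW REPORT iext1
```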
18. Verify on the target that all 14 records have been loaded into the target table.
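A quick check on the target, assuming the standard 14-row EMP data landed in the TOM schema:

```sql
-- run on the target database ggdev
SELECT COUNT(*) FROM tom.emp;
```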
20. Note: At this stage, you can delete the initial load extract and replicat processes as they are no longer needed.
If you get the below error while starting the initial load extract:
Add the below line to the ggdev MGR parameter file:
GGSCI> refresh mgr
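The exact error text is not captured in this extract, but a common cause of this failure is a target manager with no dynamic port range available for initial-load tasks. If that is the case here, the line to add to the ggdev MGR parameter file is DYNAMICPORTLIST (the port range below is only an example):

```
-- GGSCI> EDIT PARAMS MGR on the target, then:
PORT 7809
DYNAMICPORTLIST 7810-7820

-- re-read the parameter file without restarting the manager
GGSCI> refresh mgr
```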
22. Let us create the DEPT table for the FOX user from SCOTT.DEPT
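A simple way to do this, assuming the SCOTT demo schema is available on the source:

```sql
-- create FOX.DEPT as a copy of SCOTT.DEPT (4 rows)
CREATE TABLE fox.dept AS SELECT * FROM scott.dept;
```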
23. On the target, just create the DEPT table without any data in it. Generate the DDL command for the FOX.DEPT table.
In the generated output, change FOX to TOM and execute it on the target database ggdev.
37. Start Initial Load & Change Sync
First, start the change sync extract and data pump on the source. This will start capturing changes while we perform the initial load. Do not start the replicat at this point.
38. At this stage, capture the database SCN number. We will start the replicat on the target from this SCN onwards.
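The SCN can be captured on the source with:

```sql
SELECT current_scn FROM v$database;
```

Note the value; it is later supplied to START REPLICAT via the AFTERCSN option so that the replicat skips changes already applied by the initial load.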
39. Let us make an update to the DEPT table. This update will be captured by both the initial load and the change capture. Later we will analyze how GG handles conflicts.
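Any committed change works for this test; the statement below is only illustrative:

```sql
-- run on the source while the initial load is in progress
UPDATE fox.dept SET loc = 'BOSTON' WHERE deptno = 10;
COMMIT;
```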
40. Now start the initial load extract. Remember, this will automatically start the initial load replicat on the target.
41. Verify on the target that all 4 records have been loaded into the target table.
42. Let us make some changes to the DEPT table; if everything goes well, we must see these changes after starting the replicat.
43. Now start the change sync replicat
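Assuming the change-sync replicat group is named rep1, start it from the SCN captured in step 38 so that already-loaded changes are not applied twice:

```
GGSCI> START REPLICAT rep1, AFTERCSN <scn_from_step_38>
```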
Note: At this stage, you can delete the initial load extract and replicat processes as they are no longer needed.