The document describes IBM DB2's High Availability Disaster Recovery (HADR) multiple standby configuration. It allows a primary database to have one principal standby and up to two auxiliary standbys. The principal standby supports all sync modes, while auxiliary standbys use super async mode. Takeovers can occur from any standby and DB2 will automatically reconfigure other standbys to connect to the new primary if they are in its target list. The document provides details on configuration, initialization, failover behavior and an example deployment across four servers.
Oracle RAC 19c - the Basis for the Autonomous Database (Markus Michalewicz)
Oracle Real Application Clusters (RAC) has been Oracle's premier database availability and scalability solution for more than two decades as it provides near linear horizontal scalability without the need to change the application code. This session explains why Oracle RAC 19c is the basis for Oracle's Autonomous Database by introducing some of its latest features, some of which were specifically designed for ATP-D, as well as by taking a peek under the hood of the dedicated Autonomous Database Service (ATP-D).
Oracle Transparent Data Encryption (TDE) 12c (Nabeel Yoosuf)
This presentation provides an introduction to Oracle Transparent Data Encryption technology in 12c. It is provided as part of Oracle Advanced Security.
Using the FLaNK Stack for Edge AI (Flink, NiFi, Kafka, Kudu) (Timothy Spann)
Introducing the FLaNK stack, which combines Apache Flink, Apache NiFi, Apache Kafka and Apache Kudu to build fast applications for IoT, AI and rapid ingest.
FLaNK provides a quick set of tools to build applications at any scale for any streaming and IoT use cases.
https://www.flankstack.dev/
Tools
Apache Flink, Apache Kafka, Apache NiFi, MiNiFi, Apache MXNet, Apache Kudu, Apache Impala, Apache HDFS
References
https://www.datainmotion.dev/2019/08/rapid-iot-development-with-cloudera.html
https://www.datainmotion.dev/2019/09/powering-edge-ai-for-sensor-reading.html
https://www.datainmotion.dev/2019/05/dataworks-summit-dc-2019-report.html
https://www.datainmotion.dev/2019/03/using-raspberry-pi-3b-with-apache-nifi.html
Track
Community and Industry Impact
Oracle Active Data Guard: Best Practices and New Features Deep Dive (Glen Hawkins)
Oracle Data Guard and Oracle Active Data Guard have long been the answer for the real-time protection, availability, and usability of Oracle data. This presentation provides an in-depth look at several key new features that will make your life easier and protect your data in new and more flexible ways. Learn how Oracle Active Data Guard 19c has been integrated with Oracle Database In-Memory and offers a faster application response after a role transition. See how DML can now be redirected from an Oracle Active Data Guard standby to its primary for more flexible data protection in today’s data centers or your data clouds. This technical deep dive on Active Data Guard is designed to give you a glimpse into upcoming new features brought to you by Oracle Development.
The Object Management Group (OMG) Data Distribution Service (DDS) and the OPC Foundation OLE for Process Control Unified Architecture (OPC-UA) are commonly considered two of the most relevant technologies for data and information management in the Industrial Internet of Things. Although several articles and quotes on the two technologies have appeared in various media in the past six months, there is still incredible confusion about how the two technologies compare and where each applies.
This presentation was motivated by the author's frustration with reading and hearing so many misconceptions as well as "apples-to-oranges" comparisons. Thus, to contribute to clarity and help with positioning and applicability, this webcast will (1) explain the key concepts behind DDS and OPC-UA and relate them to the reasons why these technologies were created in the first place, (2) clarify the differences and applicability in IoT for DDS and OPC-UA, and (3) report on the ongoing standardisation activities that are looking at DDS/OPC-UA inter-working.
Presented at MQ Technical Conference 2018
Several businesses are now moving to implement new or existing infrastructures in containers rather than traditional on-prem or virtual machine environments. In this session we will talk about the benefits of containers and show how IBM MQ can be run in a container, providing an example of how you can get started running IBM MQ in a container.
View this presentation to get an overview of the SAP NetWeaver Business Warehouse, powered by SAP HANA, and learn more about its IT and business benefits.
Oracle Data Guard ensures high availability, disaster recovery and data protection for enterprise data. This enables production Oracle databases to survive disasters and data corruptions. Oracle 18c and 19c offer many new features that bring advantages to organizations.
This presentation covers all aspects of PostgreSQL administration, including installation, security, file structure, configuration, reporting, backup, daily maintenance, monitoring activity, disk space computations, and disaster recovery. It shows how to control host connectivity, configure the server, find the query being run by each session, and find the disk space used by each database.
A stripped-down version of a presentation about Oracle architecture. The goal was a basic understanding of, and foundation in, some components of Oracle, so that subsequent discussions would be easier.
Pg_upgrade allows data to be transferred between major Postgres versions without a costly dump/restore. This occurs by transferring the user data and version-dependent data separately. This presentation explains the internal workings of pg_upgrade and includes a pg_upgrade demonstration.
To listen to the recording please visit www.EnterpriseDB.com > Resources > Webcasts > On-demand webcasts
For more information about Postgres Plus Advanced Server you can email sales@enterprisedb.com
The Top 5 Reasons to Deploy Your Applications on Oracle RAC (Markus Michalewicz)
A presentation for developers, DBAs, and managers. This presentation was first given in the course of the AIOUG Maximum Availability Architecture (MAA) focus month in August 2021. The first reason might surprise you!
"Extended" or "Stretched" Oracle RAC has been available as a concept for a while. Oracle RAC 12c Release 2 introduces an Oracle Extended Cluster configuration, in which the cluster understands the concept of sites and extended setups. This knowledge is used to more efficiently manage "Extended Oracle RAC", whether the nodes are 0.1 mile or 10 miles apart.
The presentation was last updated on August 7th 2017 to add a reference to the new MAA White Paper: "Installing Oracle Extended Clusters on Exadata Database Machine" - http://www.oracle.com/technetwork/database/availability/maa-extclusters-installguide-3748227.pdf and to correct some minor details.
Various HA and DR setups for Postgres Plus Advanced Server:
Active – Passive OS HA Clustering
Log Shipping Replication (Hot Standby Mode)
Hot Streaming Replication (Hot Standby Mode)
EDB Postgres Plus Failover Manager
HA with read scaling (with pg-pool)
xDB Single Master Replication (SMR)
xDB Multi Master Replication (MMR)
Use Cases
Oracle 11g Installation With ASM and Data Guard Setup (Arun Sharma)
In this article we will look at Oracle 11g installation with ASM storage and also set up a physical standby on ASM.
We will be following below steps for our configuration:
Setup Primary Server
Setup Standby Server
Full article link is here: https://www.support.dbagenesis.com/post/oracle-11g-installation-with-asm-and-data-guard-setup
Oracle Data Guard Physical Standby Configuration (Arun Sharma)
There are various steps in which you can configure a physical standby database. We need to make several changes to the primary database before we can even set up the standby database.
This article applies to Oracle 12c R2 database version
Full link of article is here: https://www.support.dbagenesis.com/post/configure-physical-standby
Seamless replication and disaster recovery for Apache Hive Warehouse (DataWorks Summit)
As Apache Hadoop clusters become central to an organization’s operations, they have clusters in more than one data center. Historically, this has been largely driven by requirements of business continuity planning or geo localization. It has also recently been gaining a lot of interest from a hybrid cloud perspective, i.e. wherein people are trying to augment their traditional on-prem setup with cloud-based additions as well. A robust replication solution is a fundamental requirement in such cases.
Seamless disaster recovery has several challenges. Data, metadata, and transaction information need to be moved in sync. It should also be easy for the users and applications to reason about the state of the replica. The “hadoop scale” also brings unique challenges as bandwidth between clusters can be a limiting factor. The data transfer has to be minimized for replication, failover, as well as fail back scenarios.
In this talk we will discuss how the above challenges are addressed for supporting seamless replication and disaster recovery for Hive.
Speakers
Sankar Hariappan, Hortonworks, Staff Software Engineer
Anishek Agarwal, Hortonworks, Engineering Manager
Introduction to HBase. HBase is a NoSQL database which has experienced a tremendous increase in popularity during the last years. Large companies like Facebook, LinkedIn and Foursquare are using HBase. In this presentation we will address questions like: What is HBase? How does it compare to relational databases? What is the architecture? How does HBase work? What about schema design? What about the IT resources? Questions that should help you consider whether this solution might be suitable in your case.
PuppetConf 2016: An Introduction to Measuring and Tuning PE Performance – Cha... (Puppet)
Here are the slides from Charlie Sharpsteen's PuppetConf 2016 presentation called An Introduction to Measuring and Tuning PE Performance. Watch the videos at https://www.youtube.com/playlist?list=PLV86BgbREluVjwwt-9UL8u2Uy8xnzpIqa
In this session you will learn:
What is Big Data?
What is Hadoop?
Overview of Hadoop Ecosystem
Hadoop Distributed File System or HDFS
Hadoop Cluster Modes
Yarn
MapReduce
Hive
Pig
Zookeeper
Flume
Sqoop
For more information, visit: https://www.mindsmapped.com/courses/big-data-hadoop/hadoop-developer-training-a-step-by-step-tutorial/
A powerful feature in Postgres called Foreign Data Wrappers lets end users integrate data from MongoDB, Hadoop and other solutions with their Postgres database and leverage it as single, seamless database using SQL.
Use of these features has skyrocketed since EDB released to the open source community new FDWs for MongoDB, Hadoop and MySQL that support both read and write capabilities. Now greatly enhanced, FDWs enable integrating data across disparate deployments to support new workloads, expanded development goals and harvesting greater value from data.
Target Audience: This presentation is intended for IT Professionals seeking to do more with Postgres in his every day projects and build new applications.
1. HADR Update: Multiple Standby Support
Dale McInnis
IBM Canada Ltd.
Session Code: H08
Wed. May 16: 2:45 – 3:45 pm | Platform: Linux, UNIX, Windows
2. HADR Multiple Standby Overview
[Diagram: a single Primary replicates to a Principal Standby (any sync mode) and to two Auxiliary Standbys (super async mode only).]
3. HADR Multiple Standby Features
• Principal Standby (PS) is equivalent to the standby of today
• PS supports any sync mode
• Can automate takeover using integrated TSA
• Support for up to two (2) Auxiliary Standbys (AS)
• AS supports super async mode only
• No automated takeover supported
• Always fed from the current primary
• Can be added dynamically
4. HADR Multiple Standby Enablement
• Principal Standby (PS) is specified via HADR_REMOTE_HOST, HADR_REMOTE_SVC, and HADR_REMOTE_INST
• HADR_TARGET_LIST is used to specify all standbys, both auxiliary as well as the principal standby
• HADR_TARGET_LIST uses a hostname (or IP address) and port number format with the "|" character as a delimiter
  • E.g. host1.ibm.com:4000|host2.ibm.com:hadr_service|9.47.73.34:5000
• On each standby, HADR_REMOTE_HOST, HADR_REMOTE_INST, and HADR_REMOTE_SVC must point to the current primary
• Primary will validate hostname and port number upon handshake from AS
• Existing single standby installations need no configuration change
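The "|"-delimited HADR_TARGET_LIST format lends itself to a tiny parser. The sketch below is plain Python with an illustrative function name; nothing here is DB2 tooling, it only shows how the example value from the slide breaks apart:

```python
# Hypothetical parser for the HADR_TARGET_LIST format: entries are
# host:port (or host:service-name) pairs separated by "|".

def parse_target_list(target_list):
    """Split a HADR_TARGET_LIST string into (host, service) tuples."""
    entries = []
    for entry in target_list.split("|"):
        # partition splits on the first ":" only, so IPv4 addresses are safe
        host, _, svc = entry.strip().partition(":")
        entries.append((host, svc))
    return entries

# The example value from the slide; when this node is the primary, the
# first entry is its principal standby.
targets = parse_target_list(
    "host1.ibm.com:4000|host2.ibm.com:hadr_service|9.47.73.34:5000")
print(targets)
```

Note that a service name like hadr_service is legal in place of a numeric port, so the parser keeps the second component as a string.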
5. Multiple Standby Configuration Setup
On each node (primary and all standbys), set the local configuration information:
"UPDATE DB CFG FOR dbname USING
HADR_LOCAL_HOST hostname
HADR_LOCAL_SVC servicename
HADR_SYNCMODE syncmode"
Set the hadr_target_list configuration parameter on all of the standbys and the primary:
"DB2 UPDATE DB CFG FOR dbname USING
HADR_TARGET_LIST principalhostname:principalservicename|auxhostname1:auxservicename1|auxhostname2:auxservicename2"
These values are to be set with respect to how the cluster should appear if this node became the primary.
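The rule "set these values with respect to how the cluster should appear if this node became the primary" can be made concrete with a small sketch. The helper and the topology dictionaries below are hypothetical, not DB2 utilities; they just encode the rule that each node lists every other node, with its own intended principal standby first:

```python
# Illustrative sketch of the per-node HADR_TARGET_LIST rule.

def target_list_for(node, nodes, principal_of):
    """nodes: name -> 'host:port'; principal_of: node -> its principal standby.
    Returns the target list this node should carry, principal standby first."""
    others = [n for n in nodes if n != node]
    # Stable sort: the node's own principal standby moves to the front,
    # everyone else keeps their original order.
    others.sort(key=lambda n: n != principal_of[node])
    return "|".join(nodes[n] for n in others)

# The four-host example used later in the deck.
nodes = {"host1": "host1:10", "host2": "host2:40",
         "host3": "host3:41", "host4": "host4:42"}
principal_of = {"host1": "host2", "host2": "host1",
                "host3": "host2", "host4": "host2"}
print(target_list_for("host1", nodes, principal_of))
```

Running this for host1 yields host2:40|host3:41|host4:42, which matches the value the deck later sets on the primary.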
6. Multiple Standby Configuration Setup
Optional step:
• On the primary, set the parameters to the corresponding values on the principal standby by issuing the following command:
DB2 "UPDATE DB CFG FOR dbname USING
HADR_REMOTE_HOST principalhostname
HADR_REMOTE_SVC principalservicename
HADR_REMOTE_INST principalinstname"
• On each standby, set the parameters to the corresponding values on the primary by issuing the following command:
DB2 "UPDATE DB CFG FOR dbname USING
HADR_REMOTE_HOST primaryhostname
HADR_REMOTE_SVC primaryservicename
HADR_REMOTE_INST primaryinstname"
7. Automatic Reconfiguration of HADR Parameters
Reconfiguration after HADR starts
Configuration parameters that identify the primary database for the standbys and identify the principal standby for the primary are automatically reset when HADR starts if you did not correctly set them. This behavior applies to the following configuration parameters:
• hadr_remote_host
• hadr_remote_inst
• hadr_remote_svc
Reconfiguration during and after a takeover
After a forced or non-forced takeover, the values for the hadr_remote_host, hadr_remote_inst, and hadr_remote_svc configuration parameters are updated automatically on all the databases that are potentially a part of the new setup. Any database that is not a valid standby for the new primary, because they are not included in each other's target list, is not updated. If you want to include a database as a standby, you must ensure that it is in the target list of the primary and that the primary is in the target list of the new standby database. Otherwise, the standby database waits for the old primary to restart as a primary.
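As a rough model of this reconfiguration rule, the Python sketch below tracks only a remote-host setting per database (the dictionaries and function are illustrative, not DB2 internals): after a takeover, every database that is in the new primary's target list and that also lists the new primary gets repointed, while databases failing that mutual check are left as orphans.

```python
# Illustrative model of automatic reconfiguration after a takeover.

def reconfigure_after_takeover(databases, new_primary):
    """databases: name -> {'target_list': [names], 'remote_host': name}."""
    primary_targets = databases[new_primary]["target_list"]
    for name, cfg in databases.items():
        if name == new_primary:
            # The new primary points at its principal standby (first entry).
            cfg["remote_host"] = primary_targets[0]
        elif name in primary_targets and new_primary in cfg["target_list"]:
            cfg["remote_host"] = new_primary  # valid standby: redirected
        # otherwise: orphaned standby, configuration left untouched

dbs = {
    "host1": {"target_list": ["host2", "host3", "host4"], "remote_host": "host2"},
    "host2": {"target_list": ["host1", "host3", "host4"], "remote_host": "host1"},
    "host3": {"target_list": ["host2", "host1", "host4"], "remote_host": "host1"},
    "host4": {"target_list": ["host2", "host1", "host3"], "remote_host": "host1"},
}
reconfigure_after_takeover(dbs, "host2")
print({n: c["remote_host"] for n, c in dbs.items()})
```

With the deck's four-host topology and a takeover on host2, the model repoints host3 and host4 at host2 and leaves host2 pointing at host1, which agrees with the "after issuing takeover on host2" table later in the deck.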
8. Takeover Behavior in Multiple Standby Environment
Two types of takeovers, as in traditional HADR:
• Role switch
  • Sometimes called graceful takeover or non-forced takeover; can be performed only when the primary is available. It switches the roles of primary and standby and provides a zero data loss guarantee.
• Failover
  • Can be performed when the primary is not available. It is commonly used in primary failure cases to make the standby the new primary. The old primary remains in the primary role in a forced takeover.
• Both types of takeover are supported in multiple standby mode, and any of the standby databases can take over as the primary. A crucial thing to remember, though, is that if a standby is not included in the new primary's target list, it is considered to be orphaned and cannot connect to the new primary.
9. Takeover Behavior in Multiple Standby Environment
• Takeover (forced and non-forced) is supported on all standbys
• After a takeover, DB2 auto-redirects and makes the necessary configuration changes (host/service/instance name of the new primary) on the standbys that are in the new primary's target list (and vice versa)
• Standbys not in the new primary's target list (and vice versa) are "orphaned" standbys
• Data loss (usually from a failover) complicates the picture:
  • If the old primary has more data than the new primary, it cannot be reintegrated without being reinitialized
  • If a standby has more data than the new primary, it will not pass the pair validation check and cannot become a standby for that primary
  • To avoid the latter, check the log positions of all standbys and fail over to the one with the most data
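"Fail over to the one with the most data" amounts to taking a maximum over the standbys' log positions. A minimal sketch follows; the numeric positions are invented for illustration, and in practice you would read the real values via the db2pd command or the MON_GET_HADR table function, as the deck notes later:

```python
# Pick the failover target with the most received log data.

def best_failover_target(standby_log_positions):
    """Return the standby whose received log position is highest."""
    return max(standby_log_positions, key=standby_log_positions.get)

# Invented log positions for the three standbys of the example cluster:
log_positions = {"host2": 0x5F00, "host3": 0x5A10, "host4": 0x4C00}
print(best_failover_target(log_positions))  # host2 holds the most data here
```

Any standby that received logs beyond the chosen target's position would still fail pair validation, so in a real outage this check is done before, not after, issuing the takeover.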
10. Takeover Behavior in Multiple Standby Environment
DB2 automatically makes a number of configuration changes for you so that the standbys listed in the new primary's target list can connect to the new primary. The hadr_remote_host, hadr_remote_svc, and hadr_remote_inst configuration parameters are updated on the new primary and listed standbys in the following way:
• On the new primary: They refer to the principal standby (the first database listed in the new primary's target list).
• On the standbys: They refer to the new primary. When an old primary is reintegrated to become a standby, the START HADR AS STANDBY command first converts it to a standby. Thus it can also be automatically redirected to the new primary if it is listed in the target list of the new primary.
• Orphaned standbys are not automatically updated in this way. If you want them to join as standbys, you need to ensure they are in the new primary's target list and that they include the new primary in their target lists.
11. Takeover Behavior in Multiple Standby Environment
• Role switch
  • Just as in single standby mode, role switch in multiple standby mode guarantees no data is lost between the old primary and new primary. Other standbys configured in the new primary's hadr_target_list configuration parameter are automatically redirected to the new primary and continue receiving logs.
• Failover
  • Just as in single standby mode, if a failover results in any data loss in multiple standby mode (meaning that the new primary does not have all of the data of the old primary), the old and new primary's log streams diverge and the old primary has to be reinitialized. For the other standbys, if a standby received logs from the old primary beyond the divergence point, it has to be reinitialized. Otherwise, it can connect to the new primary and continue log shipping and replay. As a result, it is very important that you check the log positions of all of the standbys and choose the standby with the most data as the failover target. You can query this information using the db2pd command or the MON_GET_HADR table function.
12. Multiple Standby Restrictions
• You can have a maximum of three standby databases: one principal standby and one or two auxiliary standbys
• Only the principal standby supports all the HADR synchronization modes; all auxiliary standbys will be in SUPERASYNC mode
• IBM Tivoli System Automation for Multiplatforms (SA MP) support applies only between the primary HADR database and its principal standby
• You must set the hadr_target_list database configuration parameter on all the databases in the multiple standby setup. In addition, for each combination of primary and standby, role switch between those databases must be allowed. That is, each database in the target list of a particular database must also have that particular database in its target list.
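These restrictions can be spelled out as a checklist. The sketch below is illustrative only (the function and data shapes are made up; DB2 performs these validations itself): at most three standbys, auxiliaries in SUPERASYNC, and mutual target lists.

```python
# Illustrative validation of the multiple-standby restrictions.

def validate_setup(databases, primary):
    """databases: name -> {'target_list': [names], 'syncmode': str}.
    The first entry of the primary's target list is its principal standby."""
    errors = []
    targets = databases[primary]["target_list"]
    if len(targets) > 3:
        errors.append("a primary may have at most three standbys")
    for aux in targets[1:]:  # every standby after the principal is auxiliary
        if databases[aux]["syncmode"] != "superasync":
            errors.append(f"auxiliary standby {aux} must use SUPERASYNC")
    for name, cfg in databases.items():
        for other in cfg["target_list"]:
            if name not in databases[other]["target_list"]:
                errors.append(f"{name} missing from {other}'s target list")
    return errors

# The deck's four-host example passes all three checks.
cluster = {
    "host1": {"target_list": ["host2", "host3", "host4"], "syncmode": "sync"},
    "host2": {"target_list": ["host1", "host3", "host4"], "syncmode": "sync"},
    "host3": {"target_list": ["host2", "host1", "host4"], "syncmode": "superasync"},
    "host4": {"target_list": ["host2", "host1", "host3"], "syncmode": "superasync"},
}
print(validate_setup(cluster, "host1"))  # [] -- the example setup is valid
```

Dropping host1 from host3's target list, or setting an auxiliary to SYNC, would surface an error from the corresponding check.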
13. HADR Multiple Standby Example
[Diagram: City A contains the Primary (host name: host1, port: 10, instance name: dbinst1) and the Principal standby (hostname: host2, port: 40, instance name: dbinst2), linked in SYNC mode. City B contains two Auxiliary standbys (hostname: host3, port: 41, instance name: dbinst3; hostname: host4, port: 42, instance name: dbinst4), each fed from the primary in SUPERASYNC mode.]
14. Multiple Standby Example
Intended role       Host name   Port #   Instance name
Primary             host1       10       dbinst1
Principal standby   host2       40       dbinst2
Auxiliary standby   host3       41       dbinst3
Auxiliary standby   host4       42       dbinst4
15. Initial Setup
Step 1: Create a backup image. On host1 (the primary):
DB2 BACKUP DB HADR_DB to /nfs/db_backup
Step 2: Initialize the standbys. On each of host2, host3, and host4:
DB2 RESTORE DB HADR_DB from /nfs/db_backup
Step 3: Update the configuration
HADR_LOCAL_HOST, HADR_LOCAL_SVC, and HADR_SYNCMODE must be set on all nodes.
HADR_TARGET_LIST should be set to the following on all nodes:
PrincipalHost:Principalsvc|auxhost1:auxsvc1|auxhost2:auxsvc2
16. Initial Setup
Step 4: Update the configuration (optional step)
The following are not required to be set in a multiple standby environment, as they will be automatically set:
• hadr_remote_host
• hadr_remote_svc
• hadr_remote_inst
On the primary, set the following:
HADR_REMOTE_HOST = principalhostname
HADR_REMOTE_SVC = principalservicename
HADR_REMOTE_INST = principalinstancename
On each standby, set the following:
HADR_REMOTE_HOST = primaryhostname
HADR_REMOTE_SVC = primaryservicename
HADR_REMOTE_INST = primaryinstancename
17. Initial Setup
On host1 (the primary):
DB2 "UPDATE DB CFG FOR hadr_db USING
HADR_TARGET_LIST host2:40|host3:41|host4:42
HADR_REMOTE_HOST host2
HADR_REMOTE_SVC 40
HADR_LOCAL_HOST host1
HADR_LOCAL_SVC 10
HADR_SYNCMODE sync
HADR_REMOTE_INST db2inst2"
18. Initial Setup (con't)
On host2 (the principal standby):
DB2 "UPDATE DB CFG FOR hadr_db USING
HADR_TARGET_LIST host1:10|host3:41|host4:42
HADR_REMOTE_HOST host1
HADR_REMOTE_SVC 10
HADR_LOCAL_HOST host2
HADR_LOCAL_SVC 40
HADR_SYNCMODE sync
HADR_REMOTE_INST db2inst1"
On host3 (an auxiliary standby):
DB2 "UPDATE DB CFG FOR hadr_db USING
HADR_TARGET_LIST host2:40|host1:10|host4:42
HADR_REMOTE_HOST host1
HADR_REMOTE_SVC 10
HADR_LOCAL_HOST host3
HADR_LOCAL_SVC 41
HADR_SYNCMODE superasync
HADR_REMOTE_INST db2inst1"
19. Initial Setup (con't)
On host4 (an auxiliary standby):
DB2 "UPDATE DB CFG FOR hadr_db USING
HADR_TARGET_LIST host2:40|host1:10|host3:41
HADR_REMOTE_HOST host1
HADR_REMOTE_SVC 10
HADR_LOCAL_HOST host4
HADR_LOCAL_SVC 42
HADR_SYNCMODE superasync
HADR_REMOTE_INST db2inst1
HADR_REPLAY_DELAY 86400"
(HADR_REPLAY_DELAY 86400 gives this standby a 24-hour replay delay.)
20. Starting HADR
The DBA starts the standby databases first, by issuing the following command on each of host2, host3, and host4:
DB2 START HADR ON DB hadr_db AS STANDBY
Next, the DBA starts HADR on the primary database, on host1:
DB2 START HADR ON DB hadr_db AS PRIMARY
21. HADR Multiple Standby Example
[Diagram: the same topology as slide 13: Primary host1 (port 10, dbinst1) and Principal standby host2 (port 40, dbinst2) in City A linked in SYNC mode; Auxiliary standbys host3 (port 41, dbinst3) and host4 (port 42, dbinst4) in City B fed via SUPERASYNC.]
22. Configuration values for each host
Configuration parameter    Host1        Host2        Host3        Host4
Hadr_target_list           host2:40|    host1:10|    host2:40|    host2:40|
                           host3:41|    host3:41|    host1:10|    host1:10|
                           host4:42     host4:42     host4:42     host3:41
Hadr_remote_host           host2        host1        host1        host1
Hadr_remote_svc            40           10           10           10
Hadr_remote_inst           dbinst2      dbinst1      dbinst1      dbinst1
Hadr_local_host            host1        host2        host3        host4
Hadr_local_svc             10           40           41           42
Operational Hadr_syncmode  sync         sync         superasync   superasync
Effective Hadr_syncmode    N/A          sync         superasync   superasync
23. HADR Multiple Standby Role Reversal Example
[Diagram: TAKEOVER is issued on host2. Host1 (port 10, dbinst1) changes from Primary to Principal standby, and host2 (port 40, dbinst2) changes from Principal standby to Primary; the pair remains in SYNC mode. Auxiliary standbys host3 (port 41, dbinst3) and host4 (port 42, dbinst4) in City B are redirected to the new primary via SUPERASYNC.]
24. After issuing takeover on host2 (automatically reconfigured)

Configuration parameter    Host1                        Host2                        Host3                        Host4
hadr_target_list           host2:40|host3:41|host4:42   host1:10|host3:41|host4:42   host2:40|host1:10|host4:42   host2:40|host1:10|host3:41
hadr_remote_host           host2                        host1                        host2                        host2
hadr_remote_svc            40                           10                           40                           40
hadr_remote_inst           dbinst2                      dbinst1                      dbinst2                      dbinst2
hadr_local_host            host1                        host2                        host3                        host4
hadr_local_svc             10                           40                           41                           42
Operational hadr_syncmode  sync                         sync                         superasync                   superasync
Effective hadr_syncmode    sync                         N/A                          superasync                   superasync
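After a takeover, DB2 automatically redirects every other database whose hadr_target_list contains the new primary so that its hadr_remote_host, hadr_remote_svc, and hadr_remote_inst point at the new primary. This hypothetical sketch (not DB2 internals) models that reconfiguration against the values in the table:

```python
# Minimal sketch, assuming the behaviour described in this deck: standbys
# listing the new primary in their target list are redirected to it.

def reconfigure_after_takeover(configs, new_primary, new_primary_inst):
    """configs: {hostname: {'target_list': [...], 'remote_host': ..., ...}}.
    new_primary is the 'host:port' of the database that just took over."""
    host, svc = new_primary.split(":")
    for name, cfg in configs.items():
        if name == host:
            continue  # the new primary keeps its own remote settings
        if new_primary in cfg["target_list"]:
            cfg["remote_host"] = host
            cfg["remote_svc"] = svc
            cfg["remote_inst"] = new_primary_inst
    return configs

configs = {
    "host1": {"target_list": ["host2:40", "host3:41", "host4:42"],
              "remote_host": "host2", "remote_svc": "40", "remote_inst": "dbinst2"},
    "host3": {"target_list": ["host2:40", "host1:10", "host4:42"],
              "remote_host": "host1", "remote_svc": "10", "remote_inst": "dbinst1"},
    "host4": {"target_list": ["host2:40", "host1:10", "host3:41"],
              "remote_host": "host1", "remote_svc": "10", "remote_inst": "dbinst1"},
}
reconfigure_after_takeover(configs, "host2:40", "dbinst2")
print(configs["host3"]["remote_host"])  # host2, matching the table above
```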
25. HADR Multiple Standby Forced Takeover Example

[Topology diagram, forced takeover on host3]

City A:
• Primary: host1, port 10, instance dbinst1
• Principal standby: host2, port 40, instance dbinst2

City B:
• New primary after forced takeover (was auxiliary standby, SUPERASYNC): host3, port 41, instance dbinst3
• Auxiliary standby: host4, port 42, instance dbinst4
29. HADR Multiple Standby Example

[Topology diagram]

City A:
• Primary: host1, port 10, instance dbinst1
• Principal standby: host2, port 40, instance dbinst2

City B:
• Auxiliary standby (SUPERASYNC): host3, port 41, instance dbinst3
• Auxiliary standby: host4, port 42, instance dbinst4
30. After issuing takeover on host3 (host1 online)

Configuration parameter    Host1                        Host2                        Host3                        Host4
hadr_target_list           host2:40|host3:41|host4:42   host1:10|host3:41|host4:42   host2:40|host1:10|host4:42   host2:40|host1:10|host3:41
hadr_remote_host           host3                        host3                        host2                        host3
hadr_remote_svc            41                           41                           40                           41
hadr_remote_inst           dbinst3                      dbinst3                      dbinst2                      dbinst3
hadr_local_host            host1                        host2                        host3                        host4
hadr_local_svc             10                           40                           41                           42
Operational hadr_syncmode  sync                         sync                         superasync                   superasync
Effective hadr_syncmode    superasync                   superasync                   N/A                          superasync
31. Software Upgrades in a Multiple Standby Environment

The procedure is essentially the same as in single standby mode, except that you should perform the upgrade on one database at a time, starting with an auxiliary standby. For example, consider the following HADR setup:
• host1 is the primary
• host2 is the principal standby
• host3 is an auxiliary standby

For this setup, perform the rolling upgrade or update in the following sequence:
1. Deactivate host3, make the required changes, activate host3, and start HADR on host3 (as a standby).
2. After host3 is caught up in log replay, deactivate host2, make the required changes, activate host2, and start HADR on host2 (as a standby).
3. After host2 is caught up in log replay and in peer state with host1, issue a takeover on host2.
4. Deactivate host1, make the required changes, activate host1, and start HADR on host1 (as a standby).
5. After host1 is in peer state with host2, issue a takeover on host1 so that it becomes the primary again and host2 becomes the principal standby again.
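The five steps above generalize to any number of auxiliary standbys: upgrade auxiliaries first, then the principal standby, take over on it, upgrade the old primary, and take over back. This hypothetical planning helper (not a DB2 tool) emits that sequence:

```python
# Minimal sketch of the rolling-upgrade ordering described above.
# Names and output strings are illustrative, not DB2 commands.

def rolling_upgrade_plan(primary, principal, auxiliaries):
    """Return the ordered list of rolling-upgrade steps."""
    plan = [f"upgrade {aux} (auxiliary standby)" for aux in auxiliaries]
    plan.append(f"upgrade {principal} (principal standby)")
    plan.append(f"takeover on {principal}")
    plan.append(f"upgrade {primary} (old primary, now a standby)")
    plan.append(f"takeover on {primary}")  # restore the original roles
    return plan

for step in rolling_upgrade_plan("host1", "host2", ["host3"]):
    print(step)
```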
32. HADR Configuration Parameters

hadr_db_role
  Indicates the current role of the database. Valid values are: STANDARD, PRIMARY, or STANDBY.
hadr_local_host
  Specifies the local host for high availability disaster recovery (HADR) TCP communication.
hadr_local_svc
  Specifies the TCP service name or port number on which the local HADR process accepts connections.
hadr_peer_window
  When you set hadr_peer_window to a non-zero time value, an HADR primary-standby database pair continues to behave as though still in peer state for the configured amount of time if the primary database loses its connection with the standby database. This helps ensure data consistency.
hadr_remote_host
  Specifies the TCP/IP host name or IP address of the remote HADR database server.
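The hadr_peer_window rule above reduces to a simple time check: after losing the standby connection, the primary keeps behaving as if in peer state until the configured window elapses. A minimal sketch of that rule, assuming second-granularity timestamps (this is an illustration, not DB2's implementation):

```python
# Hypothetical model of the hadr_peer_window behaviour described above.

def still_in_peer_window(disconnect_time, now, peer_window):
    """True while the pair must keep behaving as though in peer state."""
    if peer_window == 0:  # a zero window disables the behaviour
        return False
    return (now - disconnect_time) < peer_window

print(still_in_peer_window(100, 130, 120))  # True: 30s elapsed of a 120s window
print(still_in_peer_window(100, 250, 120))  # False: the window has expired
```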
33. HADR Configuration Parameters

hadr_remote_inst
  Specifies the instance name of the remote server. High availability disaster recovery (HADR) also checks whether a remote database requesting a connection belongs to the declared remote instance.
hadr_remote_svc
  Specifies the TCP service name or port number used by the remote HADR database server.
hadr_replay_delay
  Specifies the number of seconds that must pass from the time that a transaction is committed on the primary database to the time that the transaction is committed on the standby database.
hadr_spool_limit
  Determines the maximum amount of log data that is allowed to be spooled to disk on an HADR standby.
hadr_syncmode
  Specifies the synchronization mode, which determines how primary log writes are synchronized with the standby when the systems are in peer state.
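The hadr_replay_delay definition above implies a simple eligibility test on the standby: a transaction may be replayed only once the configured number of seconds has passed since its commit on the primary. A minimal sketch (an illustration, not DB2's replay logic), using the 86400-second delay configured on host4 earlier:

```python
# Hypothetical model of the hadr_replay_delay rule described above.

def replay_eligible(commit_time, now, replay_delay):
    """True once the standby is allowed to replay the transaction."""
    return (now - commit_time) >= replay_delay

print(replay_eligible(0, 3600, 86400))   # False: only 1 hour has passed
print(replay_eligible(0, 90000, 86400))  # True: more than 24 hours have passed
```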
34. HADR Configuration Parameters

hadr_target_list
  This parameter, which enables HADR to run in multiple standby mode, specifies a list of up to three target host:port pairs that act as HADR standby databases.
hadr_timeout
  Specifies the time (in seconds) that the HADR process waits before considering a communication attempt to have failed.
blocknonlogged
  Specifies whether the database manager allows tables to have the NOT LOGGED or NOT LOGGED INITIALLY attributes activated.
logindexbuild
  Specifies whether index creation, re-creation, or reorganization operations are logged so that indexes can be reconstructed during DB2 rollforward operations or HADR log replay procedures.
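The hadr_target_list format used throughout this deck is a '|'-separated list of at most three host:port pairs, such as host2:40|host3:41|host4:42. A hypothetical parser (not a DB2 utility) that validates that shape:

```python
# Minimal sketch of a validator for the hadr_target_list value format.

def parse_target_list(value):
    """Parse 'host:port|host:port|...' into (host, port) tuples."""
    pairs = []
    for entry in value.split("|"):
        host, sep, port = entry.partition(":")
        if not sep or not host or not port:
            raise ValueError(f"malformed entry: {entry!r}")
        pairs.append((host, port))
    if len(pairs) > 3:
        raise ValueError("at most three standby targets are allowed")
    return pairs

print(parse_target_list("host2:40|host3:41|host4:42"))
# [('host2', '40'), ('host3', '41'), ('host4', '42')]
```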
35. HADR Configuration Parameter Updates

• For some HADR configuration parameters on the primary database, you need only stop and start HADR for updates to take effect; you do not have to deactivate and reactivate the database. This dynamic capability affects only the primary database, because stopping HADR deactivates any standby database.
• The affected configuration parameters are as follows:
  • hadr_local_host
  • hadr_local_svc
  • hadr_peer_window
  • hadr_remote_host
  • hadr_remote_inst
  • hadr_remote_svc
  • hadr_replay_delay
  • hadr_spool_limit
  • hadr_syncmode
  • hadr_target_list
  • hadr_timeout
36. HADR Monitoring Changes

There are two preferred ways of monitoring HADR:
• The db2pd command
• The MON_GET_HADR table function
  • From the primary: information about the primary and all standbys
  • From a standby: information about that standby and the primary

You can also use the following methods, but they have been deprecated and may be removed in a future release:
• The GET SNAPSHOT FOR DATABASE command
• The db2GetSnapshot API
• The SNAPHADR administrative view
• The SNAP_GET_HADR table function
• Other snapshot administrative views and table functions
37. db2pd Changes – one entry for each primary-standby pair

db2pd -db HADRDB -hadr

Database Member 0 -- Database HADRDB -- Active -- Up 0 days 00:23:17 -- Date 06/08/2011

HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = SYNC
STANDBY_ID = 1
LOG_STREAM_ID = 0
HADR_STATE = PEER
PRIMARY_MEMBER_HOST = hostP.ibm.com
PRIMARY_INSTANCE = db2inst
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = hostS1.ibm.com
STANDBY_INSTANCE = db2inst
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 06/08/2011 13:38:10.199479 (1307565490)
HEARTBEAT_INTERVAL(seconds) = 25
HADR_TIMEOUT(seconds) = 100
TIME_SINCE_LAST_RECV(seconds) = 3
PEER_WAIT_LIMIT(seconds) = 0
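Since the db2pd -hadr section is a flat list of KEY = VALUE lines, scripting against it is straightforward. A hypothetical parser (not a DB2 utility) that turns such a section into a dictionary, shown against a shortened copy of the sample output:

```python
# Minimal sketch: parse the 'KEY = VALUE' lines of a db2pd -hadr section.

def parse_db2pd_hadr(text):
    """Return a dict of field name -> value for one primary-standby entry."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep:  # skip lines without a 'KEY = VALUE' shape
            fields[key.strip()] = value.strip()
    return fields

sample = """HADR_ROLE = PRIMARY
HADR_SYNCMODE = SYNC
HADR_STATE = PEER
STANDBY_MEMBER_HOST = hostS1.ibm.com
HADR_TIMEOUT(seconds) = 100"""

info = parse_db2pd_hadr(sample)
print(info["HADR_ROLE"], info["HADR_STATE"])  # PRIMARY PEER
```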
39. Monitoring Standby Servers

db2pd command
  This command retrieves information from the DB2 memory sets. You can issue it from either a primary database or a standby database. If you are using multiple standby mode and you issue this command from a standby, it does not return any information about the other standbys.

MON_GET_HADR table function
  If you want to issue the MON_GET_HADR function against a standby database, be aware of the following points:
  • You must enable reads on standby on the standby.
  • Even if your HADR setup is in multiple standby mode, the table function does not return any information about any other standbys.
40. High Availability Positioning

DB2 Integrated Clustering:
• Hot/Cold
• Basic availability solution
• Free in most cases
• Provides failover, typically in less than 1 or 2 minutes

HADR:
• Hot/Warm (optionally hot, with the reads on standby option)
• Provides very fast failover, typically in less than 1 minute
• Easy to set up, with turnkey availability
• Minimal licensing on the standby server
• Basic reporting (DB2 V9.7 FP1) requires full standby licensing
• Zero data loss option

PureScale:
• Hot/Hot
• Online failover
• Transactions on surviving nodes are not impacted
• Provides the fastest failover, typically in less than 30 seconds
• Online scale-out capability to address workload spikes
41. Disaster Recovery Positioning

Log Shipping:
• Hot/Cold
• No access to the target DB
• Target DB always at least 1 full log file behind
• Must complete rollforward before bringing the target DB online
• Full DB replication only
• Requires all operations to be logged
• DDL and DML replication support
• Async only
• Easy to set up

Q Replication:
• Hot/Hot
• Unrestricted access to the target DB
• Supports different versions of DB2 on source and target
• Supports subsetting of data
• Multiple target support
• Online failover
• Can replicate to a different server topology
• No DDL support
• Async only
• Complex to set up

HADR:
• Hot/Warm (optionally hot, with the reads on standby option)
• Provides very fast failover, typically in less than 1 minute
• Easy to set up, with turnkey availability
• Minimal licensing on the standby server
• Basic reporting (DB2 V9.7 FP1) requires full standby licensing
• Zero data loss option
42. Dale McInnis
IBM Canada Ltd.
dmcinnis@ca.ibm.com
Session H08
HADR Update: Multiple Standby Support