The document discusses Oracle Data Guard Broker and managing client connectivity in a Data Guard configuration. Some key points:
- The Data Guard Broker automates configuration and monitoring of Data Guard, allowing management of an entire configuration from a single interface.
- It supports primary and standby databases and manages services such as redo transport and log apply.
- Database services direct client connections to the correct database instance. A trigger ensures clients connect to the primary or standby as appropriate.
- Role-based services started by the trigger allow applications to fail over automatically to a new primary without code changes, using Fast Application Notification.
Dg broker & client connectivity - High Availability Day 2015
1. Oracle Data Guard Broker and Managing Client Connectivity
By Mr. AKAL SINGH (OCM)
2. Oracle Data Guard Broker: Features
• Automated creation of Data Guard configurations incorporating a primary database, a new or existing standby database, redo transport services, and log apply services
• Adding new or existing standby databases to a Data Guard configuration
• Managing an entire Data Guard configuration (including all databases, redo transport services, and log apply services)
• Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics, and detecting problems
• With the broker, you can perform all management operations locally or remotely with easy-to-use interfaces:
– Oracle Enterprise Manager Grid Control
– DGMGRL (a command-line interface)
3. Data Guard Broker: Components
• Client-side:
– Oracle Enterprise Manager Grid Control
– DGMGRL (command-line interface)
• Server-side: Data Guard monitor
– DMON process
– Configuration files
4. Data Guard Broker: Configurations
The most common configuration is a primary database at one location and a standby database at another location.
[Diagram: primary site (nodePrmy) connected to the standby site (nodeStdby1) over Oracle Net]
6. Data Guard Broker: Architecture
[Diagram: a graphical user interface or command-line interface manages the Data Guard configuration. At the primary site, a DMON process maintains the broker configuration files alongside the primary database, its online redo logs, and archived redo logs. Log transport services ship redo over Oracle Net to the standby sites (1 through 30), where a DMON process maintains a copy of the configuration files and log apply services apply redo from the standby redo logs and archived redo logs to the standby database.]
7. Data Guard Monitor: DMON Process
• Server-side background process
• Part of each database instance in the configuration
• Created when you start the broker
• Performs requested functions and monitors the resource
• Communicates with other DMON processes in the configuration
• Updates the configuration file
• Creates the drc<SID> trace file in the location set by the DIAGNOSTIC_DEST initialization parameter
• Modifies initialization parameters during role transitions as necessary
8. Benefits of Using the Data Guard Broker
• Enhances the high-availability, data protection, and disaster protection capabilities inherent in Oracle Data Guard by automating both configuration and monitoring tasks
• Streamlines the process for any one of the standby databases to replace the primary database and take over production processing
• Enables easy configuration of additional standby databases
• Provides simplified, centralized, and extended management
• Automatically communicates between the databases in a Data Guard configuration by using Oracle Net Services
• Provides built-in validation that monitors the health of all databases in the configuration
9. Comparing Configuration Management With and Without the Data Guard Broker
• General: with the broker, manage the databases as one; without it, manage each database separately.
• Creation of the standby database: with the broker, use Grid Control wizards; without it, manually create files.
• Configuration and management: with the broker, configure and manage from a single interface; without it, set up services manually for each database.
• Monitoring: with the broker, monitor continuously with unified status and reports and integration with EM events; without it, monitor each database individually through views.
• Control: with the broker, invoke role transitions with a single command; without it, coordinate sequences of multiple commands across database sites for role transitions.
10. Using the Command-Line Interface of the Data Guard Broker
DGMGRL> connect sys/oracle_4U
Connected.
DGMGRL> show configuration verbose

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Databases:
    Prmy   - Primary database
    Stdby1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
12. Data Guard Broker: Requirements
• Oracle Database Enterprise Edition
• Single-instance or multi-instance environment
• COMPATIBLE parameter: set to 10.2.0.1.0 or later for primary and standby databases
• Oracle Net Services network files: must be configured for the primary database and any existing standby databases. Enterprise Manager Grid Control configures files for new standby databases.
• GLOBAL_DBNAME attribute: set to a concatenation of db_unique_name_DGMGRL.db_domain
13. Data Guard Broker: Requirements
• DG_BROKER_START initialization parameter: set to TRUE
• Primary database: ARCHIVELOG mode
• All databases: MOUNT or OPEN mode
• DG_BROKER_CONFIG_FILEn: configured for any RAC databases
Additionally:
• You must use a server parameter file (SPFILE) for initialization parameters.
14. Data Guard Monitor: Configuration File
• The broker configuration file is:
– Automatically created and named using a default path name and file name when the broker is started
– Managed automatically by the DMON process
• The configuration file and a copy are created at each managed site with default names:
– dr1<db_unique_name>.dat
– dr2<db_unique_name>.dat
• Configuration file default locations are operating system specific:
– Default location for UNIX and Linux: ORACLE_HOME/dbs
– Default location for Windows: ORACLE_HOME\database
• Use DG_BROKER_CONFIG_FILEn to override the default path name and file name.
15. Creating a Broker Configuration
1. Invoke DGMGRL and connect to the primary database.
2. Define the configuration, including a profile for the primary database.
3. Add standby databases to the configuration.
4. Enable the configuration, including the databases.
16. Defining the Broker Configuration and the Primary Database Profile
DGMGRL> CREATE CONFIGURATION 'DGConfig1' AS
> PRIMARY DATABASE IS prmy
> CONNECT IDENTIFIER IS prmy;
Configuration "DGConfig1" created with primary database "prmy"
DGMGRL>
17. Adding a Standby Database to the Configuration
DGMGRL> ADD DATABASE stdby AS
> CONNECT IDENTIFIER IS stdby;
Database "stdby" added
DGMGRL>
19. Changing Database Properties and States
• To alter a database property:
DGMGRL> EDIT DATABASE stdby
> SET PROPERTY LogXptMode='SYNC';
• To alter the state of the standby database:
DGMGRL> EDIT DATABASE stdby SET STATE='APPLY-OFF';
• To alter the state of the primary database:
DGMGRL> EDIT DATABASE prmy
> SET STATE='TRANSPORT-OFF';
When the broker configuration is enabled, each database is in one of four states:
• TRANSPORT-ON (applicable only to the primary database)
• TRANSPORT-OFF (applicable only to the primary database)
• APPLY-ON (applicable only to a physical or logical standby database)
• APPLY-OFF (applicable only to a physical or logical standby database)
20. Managing Redo Transport Services by Using DGMGRL
Specify database properties to manage redo transport services:
• DGConnectIdentifier
• LogXptMode
• LogShipping
21. Specifying the Connection Identifier by Using the DGConnectIdentifier Property
• DGConnectIdentifier:
– Specifies the connection identifier that the broker uses to connect to a database and to manage redo transport services
– Is set to the value of the optional CONNECT IDENTIFIER clause when a database is added to the broker configuration, or is extracted from the SERVICE attribute of the LOG_ARCHIVE_DEST_n initialization parameter
• The DGConnectIdentifier value is used to set the FAL_SERVER and FAL_CLIENT initialization parameters.
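To see how a broker property such as DGConnectIdentifier is currently set, DGMGRL can display a database's property list; a minimal sketch, using the prmy database name from the examples above:

```
DGMGRL> SHOW DATABASE VERBOSE prmy;
DGMGRL> SHOW DATABASE prmy 'DGConnectIdentifier';
```

The VERBOSE form lists all properties of the database profile; naming a single quoted property shows just its value.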
22. Managing the Redo Transport Service by Using the LogXptMode Property
Definitions of LOG_ARCHIVE_DEST_n attributes:
• ASYNC: Redo data that is generated by a transaction need not have been received at a destination that has this attribute before the transaction can commit.
• SYNC: Redo data that is generated by a transaction must have been received by every enabled destination that has this attribute before the transaction can commit.
• AFFIRM and NOAFFIRM: Control whether redo transport services use synchronous or asynchronous disk I/O to write redo data to the archived redo log files
– AFFIRM: Specifies that a redo transport destination acknowledges received redo data after writing it to the standby redo log
– NOAFFIRM: Specifies that a redo transport destination acknowledges received redo data before writing it to the standby redo log
23. Managing the Redo Transport Service by Using the LogXptMode Property
• The redo transport service must be set up for the chosen data protection mode.
• Use the LogXptMode property to set the redo transport services:
– ASYNC: sets the ASYNC and NOAFFIRM attributes of LOG_ARCHIVE_DEST_n; required for maximum performance mode
– SYNC: sets the SYNC and AFFIRM attributes of LOG_ARCHIVE_DEST_n; required for maximum protection and maximum availability modes
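Putting the two properties together, moving a configuration from maximum performance to maximum availability is typically a two-step DGMGRL sequence; a sketch, assuming the stdby database from the earlier examples:

```
DGMGRL> EDIT DATABASE stdby SET PROPERTY LogXptMode='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
```

The transport mode must be raised to SYNC first, because the broker validates that every standby meets the redo transport requirements of the requested protection mode.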
24. Setting LogXptMode to ASYNC
Sets the ASYNC and NOAFFIRM attributes of LOG_ARCHIVE_DEST_n.
[Diagram: on the primary database, transactions write redo to the redo buffer, LGWR writes it to the online redo logs, and LNSn ships redo asynchronously over Oracle Net. On the standby database, RFS receives the redo into the standby redo logs, MRP or LSP applies it (real-time apply), ARC0 archives redo, and the standby acknowledgment does not hold up commits on the primary.]
25. Setting LogXptMode to SYNC
Sets the SYNC and AFFIRM attributes of LOG_ARCHIVE_DEST_n.
[Diagram: the same data flow as the ASYNC case, but LGWR ships redo through LNSn synchronously over Oracle Net; a transaction commit on the primary waits for the standby acknowledgment confirming that RFS has written the redo to the standby redo logs.]
26. Controlling the Shipping of Redo Data by Using the LogShipping Property
• LogShipping controls whether redo transport services can send redo data to a specified standby database.
• LogShipping is applicable only when the primary database state is set to TRANSPORT-ON.
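The property is toggled per standby with EDIT DATABASE; a minimal sketch, again assuming the stdby database used throughout this deck:

```
DGMGRL> EDIT DATABASE stdby SET PROPERTY LogShipping='OFF';
DGMGRL> EDIT DATABASE stdby SET PROPERTY LogShipping='ON';
```

This defers and then re-enables redo shipping to that one standby without touching the primary database state or the other destinations.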
28. Understanding Client Connectivity in a Data Guard Configuration
Be aware of the following issues when you manage client connectivity in a Data Guard configuration:
• Databases reside on different hosts in a Data Guard configuration.
• Clients must connect to the correct database:
– Primary
– Logical standby
– Snapshot standby
– Physical standby with real-time query
• If clients send connection requests to the wrong host, they may be connected to the wrong database or receive an error.
• Clients must automatically reconnect to the correct database in the event of a failover.
29. Understanding Client Connectivity: Using Local Naming
[Diagram: a client uses the DG_PROD service to reach, through the listeners, either the primary database or the standby database over Oracle Net.]
The tnsnames.ora file would look similar to the following:
PROD = (DESCRIPTION =
(ADDRESS=(PROTOCOL = TCP)(HOST = EDBVR6P1)(PORT = 1521))
(ADDRESS=(PROTOCOL = TCP)(HOST = EDBVR6P2)(PORT = 1521))
(CONNECT_DATA = (SERVICE_NAME = DG_PROD)))
30. Preventing Clients from Connecting to the Wrong Database
• Use database services to prevent clients from connecting to the wrong database in the Data Guard configuration.
• Database services act as an abstraction layer between the client and database instances.
• Database services register with listeners.
• Clients connect to database services instead of database instances.
• Listeners use registration details to determine which instances support a particular service at a particular moment in time.
• Listeners then direct connection requests to the correct instances; otherwise, the appropriate error is returned.
31. Oracle Services
• To manage workloads or a group of applications, you can define services for a particular application or a subset of an application's operations.
• You can also group work by type under services.
• For example, OLTP users can use one service while batch processing can use another to connect to the database.
• Users who share a service should have the same service-level requirements.
• Use srvctl or Enterprise Manager to manage services, not DBMS_SERVICE.
32. Default Service Connections
• Application services:
– Limit of 115 services per database
• Internal services:
– SYS$BACKGROUND
– SYS$USERS
– Cannot be deleted or changed
• A special Oracle database service is created by default for the Oracle RAC database.
• This default service is always available on all instances in an Oracle RAC environment.
33. Managing Services
• Database services can be managed by using the DBMS_SERVICE package when Oracle Restart is not used.
• Database service attributes:
– Service Name: for administration of the service
– Network Name: for services that are implemented for external client connections
– Transparent Application Failover (TAF) attributes: for TAF-enabled client connections
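When neither Oracle Restart nor Clusterware manages services, a service such as the DG_RTQ real-time query service used later in this deck could be created and started with DBMS_SERVICE; a minimal sketch (the service and network names are taken from this deck's examples, other attributes are left at their defaults):

```sql
BEGIN
  -- Create the service; the network name is what clients put in SERVICE_NAME
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'DG_RTQ',
    network_name => 'DG_RTQ');
  -- Start it so it registers with the listener and accepts connections
  DBMS_SERVICE.START_SERVICE('DG_RTQ');
END;
/
```

Creating the service on the primary propagates its definition to a physical standby through redo apply, which is why the startup trigger on the next slide only has to start the service, not create it.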
36. Connecting Clients to the Correct Database
• Use a database event trigger to ensure that clients connect to a database in the Data Guard configuration that is in the correct state and role.
• If no database is in the correct state and role, the trigger ensures that clients do not connect to a database.
• Use the trigger to start database services:
– DG_PROD: primary database
– DG_RTQ: physical standby database opened in READ ONLY mode (real-time query)
37. Creating the AFTER STARTUP Trigger
CREATE TRIGGER MANAGE_SERVICES AFTER STARTUP ON DATABASE
DECLARE
  ROLE  VARCHAR(30);
  OMODE VARCHAR(30);
BEGIN
  SELECT DATABASE_ROLE INTO ROLE  FROM V$DATABASE;
  SELECT OPEN_MODE     INTO OMODE FROM V$DATABASE;
  IF ROLE = 'PRIMARY' THEN
    DBMS_SERVICE.START_SERVICE('DG_PROD');
  ELSIF ROLE = 'PHYSICAL STANDBY' THEN
    IF OMODE LIKE 'READ ONLY%' THEN
      DBMS_SERVICE.START_SERVICE('DG_RTQ');
    END IF;
  END IF;
END;
/
38. Configuring Role-Based Services
• Use SRVCTL to configure Oracle Clusterware-managed services on each database in the Data Guard configuration.
• Role changes managed by the Data Guard broker automatically start services appropriate to the database role.
• The service is started when ROLE matches the current role of the database and MANAGEMENT POLICY is set to AUTOMATIC.
• Services can be started manually.
srvctl add service -d <db_unique_name> -s <service_name>
  [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]]
  [-y {AUTOMATIC | MANUAL}]
39. Example: Configuring Role-Based Services
• PAYROLL: read-write service that always runs on the database with the primary role
• ORDERSTATUS: read-only service that always runs on an Active Data Guard standby database
srvctl add service -d prmy -s DG_PROD -l PRIMARY
  -m BASIC -e SELECT -w 1 -z 180
srvctl add service -d prmy -s DG_RTQ
  -l PHYSICAL_STANDBY
41. Automatic Failover of Applications to a New Primary Database
In previous Oracle Database releases, user-written database triggers were required to implement automatic failover as follows:
• A startup trigger was used to start database services on the new primary database.
• A role-change trigger was used to publish a FAN ONS event to break JDBC clients still connected to the original primary database out of a TCP timeout.
In Oracle Database 11g Release 2 (11.2), you can automate fast failover of applications to a new primary database without the need for user-written triggers. You must use the Data Guard broker to use this feature.
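The deck shows the startup trigger but not the legacy role-change trigger it mentions; a minimal sketch of the general shape, reusing the DG_PROD service from the earlier slides (the FAN/ONS publication step varied by client stack and is only outlined in a comment):

```sql
-- Illustrative pre-11.2 approach: start role-appropriate services after a role change
CREATE OR REPLACE TRIGGER MANAGE_ROLE_CHANGE
AFTER DB_ROLE_CHANGE ON DATABASE
DECLARE
  ROLE VARCHAR(30);
BEGIN
  SELECT DATABASE_ROLE INTO ROLE FROM V$DATABASE;
  IF ROLE = 'PRIMARY' THEN
    -- Bring the read-write service up on the new primary; publishing a
    -- FAN ONS event to break stuck JDBC clients would also go here.
    DBMS_SERVICE.START_SERVICE('DG_PROD');
  END IF;
END;
/
```

With the 11.2 broker-integrated client failover, both this trigger and the hand-rolled event publication become unnecessary.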
42. Automatic Failover of Applications to a New Primary Database
[Diagram: an application tier (Oracle Application Server clusters) connects through database services to the database tier (Oracle RAC) at the primary site; after a manual or automatic failover, the services and the application connections move to the standby site, which becomes the new primary.]
43. Data Guard Broker and Fast Application Notification (FAN)
• The Data Guard broker publishes FAN events at failover time.
• Applications respond to FAN events without programmatic changes if using Oracle-integrated database clients:
– Oracle Database JDBC
– Oracle Database Oracle Call Interface (OCI)
– Oracle Database ODP.NET
• Clients that receive FAN events can be configured for Fast Connection Failover (FCF) to automatically connect to a new primary database.
• Clients connect to the new primary database using an Oracle Net connect descriptor configured for connect-time failover.
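A connect descriptor of the kind referred to in the last bullet lists the hosts of both sites and fails over between them at connect time; a sketch assuming the DG_PROD service from this deck, with illustrative host names:

```
DG_PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = DG_PROD)))
```

Because the role-based DG_PROD service is only offered where the database currently holds the primary role, a connection attempt lands on whichever site is the primary at that moment.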
44. Automating Client Failover in a Data Guard Configuration
• Relocating database services to the new primary database as part of a failover operation
• Notifying clients that the failure has occurred
• Redirecting clients to a new primary database
45. Summary
In this session, you should have learned how to:
• Describe the Data Guard broker
• Configure the Data Guard broker
• Configure client connectivity in a Data Guard configuration
• Implement failover procedures to automatically redirect clients to a new primary database