This session covers the new features and happenings in the Autonomous Database world, answering the questions DBAs and developers have about the Autonomous Database, from provisioning to backups, troubleshooting, tips and tricks, security, and HA. It is a good introduction for on-prem DBAs who want to learn how this works quickly without spending too much time on it. It addresses questions such as: What does the free tier cover? How do backups work, and are they automated? How do I manage the database, scale it up and down, and secure the environment? How do I use mTLS, and tools like SQL Developer and SQL Data Modeler? How do I tune performance? All in a quick 45-minute session covering material that might otherwise take weeks of reading documentation or span several presentations.
1. What’s new in Autonomous Database in 2022
Sandesh Rao
VP AIOps for the Autonomous Database
@sandeshr
https://www.linkedin.com/in/raosandesh/
https://www.slideshare.net/SandeshRao4
AIOUG – Jul 2022
2. How to get started with the
Autonomous Database Free Tier
3. Always Free services enable developers and students to learn, build, and get hands-on experience with
Oracle Cloud for an unlimited time
Anyone can try, for an unlimited time, the full functionality of:
• Oracle Autonomous Database
• Oracle Cloud Infrastructure including:
• Compute VMs
• Block and Object Storage
• Load Balancer
Free tier
4. Free tier – Tech spec
2 Autonomous Databases (Autonomous Data Warehouse or Autonomous Transaction Processing), each
with 1 OCPU and 20 GB storage
2 Compute VMs, each with 1/8 OCPU and 1 GB memory
2 Block Volumes, 100 GB total, with up to 5 free backups
10 GB Object Storage, 10 GB Archive Storage, and 50,000/month API requests
1 Load Balancer, 10 Mbps bandwidth
10 TB/month Outbound Data Transfer
500 million ingestion Datapoints and 1 billion Datapoints for Monitoring Service
1 million Notification delivery options per month and 1000 emails per month
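As a quick sanity check, the limits above can be captured in a small sketch that tests whether a planned deployment fits within the Always Free tier. The limit values are transcribed from this slide; check the current Oracle Cloud documentation before relying on them, and the `fits_free_tier` helper is purely illustrative:

```python
# Always Free limits as listed on the slide (verify against current docs).
FREE_LIMITS = {
    "autonomous_dbs": 2,      # 1 OCPU / 20 GB storage each
    "compute_vms": 2,         # 1/8 OCPU / 1 GB memory each
    "block_volume_gb": 100,   # total across 2 block volumes
    "object_storage_gb": 10,
    "archive_storage_gb": 10,
}

def fits_free_tier(plan: dict) -> bool:
    """Return True if every requested resource is within the free-tier limit."""
    return all(plan.get(key, 0) <= limit for key, limit in FREE_LIMITS.items())

print(fits_free_tier({"autonomous_dbs": 2, "block_volume_gb": 50}))  # True
print(fits_free_tier({"autonomous_dbs": 3}))                         # False
```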
11. Low-latency connectivity between Microsoft Azure and OCI
Deploys Oracle Database on OCI, and provides metrics on Azure
Combine the full Azure catalog of AI and application services with OCI’s most powerful database
services
No charges for Oracle Interconnect for Microsoft Azure ports or data ingress/egress over the
Interconnect
Normal billing applies for consumption of Oracle Database services, such as Autonomous Database
Multicloud with OCI and Azure
22. Disaster recovery terminology
Peer Databases: Two or more databases that are linked and replicated
Consist of a Primary database and Standby (copy of the primary) databases
Primary or Source Database: The main database that is actively being used
Standby Database: A replica of the primary database which constantly and
passively refreshes (i.e. replicates) data from the primary
Primary Region: The region in which a user first provisions a primary database and enables
cross-region Autonomous Data Guard
Remote Region: The user-selected region in which the standby is provisioned when enabling cross-region
Autonomous Data Guard.
Paired Regions: Two regions that are paired together to support X-ADG, such that a primary database may be provisioned in one of
the regions and its remote standby may be provisioned in the other region.
Recovery Point Objective (RPO): An organization's tolerance for data loss, after which business operations start to get severely
impacted
Recovery Time Objective (RTO): An organization's tolerance for the unavailability (or downtime) of a service, after which business
operations start to get severely impacted
(Diagram: example paired regions, London and Frankfurt)
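The RPO and RTO definitions above can be made concrete with a small sketch that measures both from event timestamps. The timestamps here are hypothetical; in practice they would come from your monitoring or Data Guard broker logs:

```python
from datetime import datetime

# Hypothetical timeline of a regional outage and failover.
outage_start     = datetime(2022, 7, 1, 10, 0, 0)   # primary region goes down
last_replicated  = datetime(2022, 7, 1, 9, 59, 40)  # last change applied on the standby
service_restored = datetime(2022, 7, 1, 10, 2, 30)  # failover completes, standby is primary

# RPO: how much data was lost (the replication gap at the moment of failure).
rpo = (outage_start - last_replicated).total_seconds()
# RTO: how long the service was down before the standby took over.
rto = (service_restored - outage_start).total_seconds()

print(f"RPO: {rpo:.0f}s, RTO: {rto:.0f}s")  # RPO: 20s, RTO: 150s
```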
25. If a regional failure occurs and your primary database is brought down, you may trigger a "Failover" to the remote standby
database.
Failover across regions to the remote standby
• A failover is a role change - switching control from the primary database to the
standby database
• After a failover, a new remote standby for your new primary will be
automatically provisioned when the primary region becomes available again
• During a failover, the system automatically recovers as much data as possible
• Minimizing any potential data loss; there may be a few seconds or minutes of
data loss
• You would usually perform a failover in a true disaster scenario, accepting
the few minutes of potential data loss to get your database back
online as soon as possible
26. Once your remote standby is provisioned, you will see a "Switchover" option on your database's console.
Switchover testing across regions with the remote standby
• A switchover, performed from the remote standby while both your primary
and standby are healthy, is a role change: control switches from the
primary database to the remote standby database
• May take several minutes, depending on the number of changes in the
primary database
• Switchover guarantees no data loss
• You would usually perform a Switchover to test your applications or
mid-tiers against this role change behaviour
30. ADB now has a procedure to export a query result as JSON
directly to an Object Storage bucket.
The query can be an advanced query, including joins
or subqueries.
Specify the format parameter with the compression option
to compress the output files.
Use DBMS_CLOUD.DELETE_OBJECT to delete the files
BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'bucketname/filename',
    query           => 'SELECT * FROM DEPT',
    format          => JSON_OBJECT('type' VALUE 'json'));
END;
/
Export Data As JSON To Object Storage
OVERVIEW HOW IT WORKS
32. Per-database with Instance Wallet selected:
• All existing database-specific instance wallets are voided.
• After rotation, you need to download a new instance wallet to connect to the database.
• NOTE - Regional wallets, which contain the certification keys for all databases, continue to work
Regional level with Regional Wallet selected:
• Both the regional and the database-specific instance wallets are voided.
• After rotation, you need to download a new regional or instance wallet to connect to any database in the
region
• All user sessions are terminated for databases whose wallet is rotated.
• User session termination begins after wallet rotation completes; however, this process does
not happen immediately.
New Option To Rotate Wallets For ADB
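After a rotation, clients reconnect with the freshly downloaded wallet. As a minimal sketch, python-oracledb's thin mode accepts wallet-related connect parameters (`config_dir`, `wallet_location`, `wallet_password`); the service name and paths below are placeholder assumptions:

```python
# Sketch: rebuilding connect parameters after downloading the rotated wallet.
# The actual oracledb.connect() call is commented out since it needs a live database.
def mtls_connect_params(user: str, dsn: str, wallet_dir: str) -> dict:
    """Connect parameters for an mTLS (wallet-based) ADB connection."""
    return {
        "user": user,
        "dsn": dsn,                     # e.g. a tnsnames.ora alias from the new wallet
        "config_dir": wallet_dir,       # directory holding tnsnames.ora
        "wallet_location": wallet_dir,  # directory holding the wallet files
        "wallet_password": None,        # set this if the wallet is password-protected
    }

params = mtls_connect_params("ADMIN", "myadb_high", "/tmp/new_wallet")
# import oracledb
# conn = oracledb.connect(password="...", **params)
```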
33.
34.
35.
36.
37. mTLS or TLS
Most people do not like to configure wallets
• ADB uses mTLS to establish the client-server
connection
• Both the client and the server hold a secret
key, which is exchanged and validated
• Going forward, you can connect to ADB using TLS
instead of mTLS
• To keep this secure, enabling TLS on an ADB
instance with a public endpoint requires an
Access Control List (ACL) to be in place
• Traffic from outside the VCN is blocked, giving you
confidence that your connection is secured
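With one-way TLS, no client wallet is needed; the client connects with a `tcps` connect descriptor. The sketch below shows the general descriptor shape only; the host and service names are placeholders, and the real string should be copied from the database's Connection page in the console:

```python
# Sketch: building a one-way TLS (walletless) connect descriptor for ADB.
def tls_descriptor(host: str, service: str, port: int = 1521) -> str:
    """Assemble a tcps connect descriptor; no wallet files are required on the client."""
    return (
        "(description=(retry_count=20)(retry_delay=3)"
        f"(address=(protocol=tcps)(port={port})(host={host}))"
        f"(connect_data=(service_name={service}))"
        "(security=(ssl_server_dn_match=yes)))"
    )

# Placeholder host/service names for illustration only.
dsn = tls_descriptor("adb.us-ashburn-1.oraclecloud.com",
                     "myadb_high.adb.oraclecloud.com")
print("protocol=tcps" in dsn)  # True: the transport is TLS, so no mTLS wallet is needed
```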
42. Note: only DBMS_CLOUD syntax is supported
Hybrid Partitioned Tables
BEGIN
  DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
    table_name      => 'HPT1',
    credential_name => 'OBJ_STORE_CRED',
    format          => json_object('delimiter' value ',',
                                   'recorddelimiter' value 'newline',
                                   'characterset' value 'us7ascii'),
    column_list     => 'col1 number, col2 number, col3 number',
    partitioning_clause => 'partition by range (col1)
      (partition p1 values less than (1000) external location (
         ''https://swiftobjectstorage.us-ashburn-1 .../file_01.txt''),
       partition p2 values less than (2000) external location (
         ''https://swiftobjectstorage.us-ashburn-1 .../file_02.txt''),
       partition p3 values less than (3000))');
END;
/
45. Set Patch Level When Creating A Clone and Retrieve Patch Details
46. Set Patch Level When Creating A Clone
When you provision or clone an Autonomous
Database instance, you can select the patch level applied
to the instance for upcoming patches.
There are two patch level options: Regular and Early.
The Early patch level allows testing upcoming patches one
week before they are applied as part of the regular patching
program.
The console shows the patch level setting in the section
headed Maintenance.
OVERVIEW HOW IT WORKS
47. View Autonomous Database maintenance event history to see details about past maintenance events
(requires ADMIN user)
View Patch Details OVERVIEW
48. SELECT * FROM DBA_CLOUD_PATCH_INFO;
SELECT * FROM DBA_CLOUD_PATCH_INFO WHERE PATCH_VERSION = 'ADBS-21.7.1.2';
View Patch Details HOW IT WORKS
50. Oracle Database API for MongoDB
enables developers to connect
applications directly to Oracle
Autonomous Database using MongoDB's
own drivers and tools
Oracle Database API for MongoDB
Developers can combine existing
MongoDB skills with the full power of
Autonomous Database
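As a sketch of what connecting looks like, the URI below follows the general shape of a MongoDB API connection string for ADB; the exact template should be copied from the console, and the host and user names here are placeholders. The key practical point is that the credentials must be percent-encoded:

```python
from urllib.parse import quote_plus

def mongo_api_uri(user: str, password: str, host: str) -> str:
    """Assemble an illustrative MongoDB API URI; special characters in the
    credentials must be percent-encoded or the driver will reject the URI."""
    u, p = quote_plus(user), quote_plus(password)
    return (f"mongodb://{u}:{p}@{host}:27017/{u}"
            "?authMechanism=PLAIN&authSource=$external"
            "&ssl=true&retryWrites=false&loadBalanced=true")

# Placeholder credentials and host for illustration only.
uri = mongo_api_uri("ADB_USER", "p@ss/word",
                    "myadb.adb.us-ashburn-1.oraclecloudapps.com")
# A MongoDB driver (e.g. pymongo.MongoClient(uri)) would consume this URI.
```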
70. ADB provides two options for Transparent Data Encryption (TDE) to encrypt data in the database:
• Oracle-managed encryption keys
• Customer-managed encryption keys
• Customer-managed keys integrate with the Oracle Cloud Infrastructure Vault service
• When rotating a customer-managed master encryption key, ADB generates a new TDE master key
• ADB uses the new TDE master key to re-encrypt the tablespace encryption keys that encrypt and decrypt your data
• The operation is fast and does not require database downtime
Customer Managed Keys
OVERVIEW
75. Use Resource Principal To Access OCI Resources
1) Create a dynamic group
Tells IAM that a given Autonomous Database should be able to read from the Object
Storage buckets and objects that are in a given compartment
HOW IT WORKS
In the OCI console, go to ‘Identity and Security’ -> ‘Dynamic Groups’ -> ‘Create Dynamic Group’
To include only your ADB-S instance in this dynamic group, add its OCID in the following rule:
resource.id = 'ocid1.autonomousdatabase.oc1.iad.osbgdthsnmakytsbnjpq7n37q'
77. Use Resource Principal To Access OCI Resources
2) Create a policy
Allow this resource to access our Object Storage bucket that resides in a given compartment
HOW IT WORKS
In the OCI console, go to ‘Identity and Security’ -> ‘Policies’-> ‘Create Policy’
Add your policy statement in plain text or use the Policy Builder.
Allow dynamic-group ctuzlaDynamicGroup to read buckets in compartment ctuzlaRPcomp
Allow dynamic-group ctuzlaDynamicGroup to read objects in compartment ctuzlaRPcomp
Note: It’s also possible to allow higher levels of access as described in the documentation
78. Use Resource Principal To Access OCI Resources
3) Enable resource principal in ADB-S
Resource principal is not enabled by default in ADB-S.
In order to be able to use resource principal in our ADB-S instance, we need to enable it using
the DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL procedure:
HOW IT WORKS
As ADMIN user, execute the following statement:
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();
PL/SQL procedure successfully completed.
79. Use Resource Principal To Access OCI Resources
4) Verify that resource principal is enabled:
HOW IT WORKS
SELECT owner, credential_name
FROM dba_credentials
WHERE credential_name = 'OCI$RESOURCE_PRINCIPAL' AND owner = 'ADMIN';
OWNER CREDENTIAL_NAME
----- ----------------------
ADMIN OCI$RESOURCE_PRINCIPAL
80. Use Resource Principal To Access OCI Resources
5) Optionally, enable other database users to call DBMS_CLOUD APIs using resource principal
HOW IT WORKS
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(username => 'ADB_USER');
PL/SQL procedure successfully completed.
81. Use Resource Principal To Access OCI Resources
6) Load data from Object Storage using resource principal
HOW IT WORKS
CREATE TABLE CHANNELS
(channel_id CHAR(1),
channel_desc VARCHAR2(20),
channel_class VARCHAR2(20)
);
Table CHANNELS created.
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name =>'CHANNELS',
credential_name =>'OCI$RESOURCE_PRINCIPAL',
file_uri_list =>'https://objectstorage.us-ashburn-
1.oraclecloud.com/n/adwc4pm/b/ctuzlaBucket/o/chan_v3.dat',
format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true')
);
END;
/
PL/SQL procedure successfully completed.
83. Load data using DBMS_CLOUD
• For data loading from files in the Cloud
• Store your object storage credentials
• Use the procedure DBMS_CLOUD.COPY_DATA to load data
• The source file in this example is channels.txt
(Diagram: File-01, File-02, and File-03 in an Object Store bucket)
SET DEFINE OFF
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adwc_user@example.com',
    password        => 'password');
END;
/
84. Load data using DBMS_CLOUD
CREATE TABLE CHANNELS (
  channel_id    CHAR(1),
  channel_desc  VARCHAR2(20),
  channel_class VARCHAR2(20));

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format          => json_object('delimiter' value ','));
END;
/

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp01.dmp,
https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp02.dmp',
    format          => json_object('type' value 'datapump'));
END;
/
85. Load data using DBMS_CLOUD
BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'fruit',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/fruit_bucket/o/myCollection.json',
    format          => JSON_OBJECT('recorddelimiter' value '''n'''));
END;
/

BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'fruit2',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/json/o/fruit_array.json',
    format          => '{"recorddelimiter" : "0x''01''", "unpackarrays" : TRUE}');
END;
/
86. Load data using DBMS_CLOUD
SELECT table_name, owner_name, type, status, start_time, update_time,
       logfile_table, badfile_table
FROM   user_load_operations
WHERE  type = 'COPY';

TABLE_NAME OWNER_NAME TYPE STATUS    START_TIME          UPDATE_TIME         LOGFILE_TABLE BADFILE_TABLE
--------------------------------------------------------------------------------------------------------
FRUIT      ADMIN      COPY COMPLETED 2020-04-23 22:27:37 2020-04-23 22:27:38 ""            ""
FRUIT      ADMIN      COPY FAILED    2020-04-23 22:28:36 2020-04-23 22:28:37 COPY$2_LOG    COPY$2_BAD

SELECT credential_name, username, comments FROM all_credentials;

CREDENTIAL_NAME USERNAME              COMMENTS
--------------- --------------------- -------------------------------------------------------
ADB_TOKEN       user_name@example.com {"comments":"Created via DBMS_CLOUD.create_credential"}
DEF_CRED_NAME   user_name@example.com {"comments":"Created via DBMS_CLOUD.create_credential"}
91. Thank You
Any Questions ?
Sandesh Rao
VP AIOps for the Autonomous Database
@sandeshr
https://www.linkedin.com/in/raosandesh/
https://www.slideshare.net/SandeshRao4