This is an opportunity to describe how we can now support inter-database Resource Management. The dials simply show that as higher-priority workloads come online, the lower-priority ones release resources. We aren't being specific about the resource; it could be CPU or Parallel Query Servers.
ERP never moves to 12.x
Patches, versions: you only do it once for all the PDBs…
19
Concept of a PDB seed: a template with the database configuration; you clone it and deploy it in the target container (Dev or Prod).
22
Fast sync means the primary does not wait for confirmation that the remote site has written the redo to disk before acknowledging that remote propagation has completed. Improved throughput for a marginal increase in risk.
Typically, a distance greater than 100-150 miles is where latency becomes punitive for a synchronous standby. It is possible to run at greater distances, but you experience a degradation in performance: typically 1 ms per 100 miles.
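The rule of thumb above lends itself to a quick back-of-the-envelope check. A minimal sketch, assuming the quoted ~1 ms per 100 miles is the latency added to each synchronous commit (real figures depend on the fiber route and round-trip behavior):

```python
# Back-of-the-envelope: extra commit latency for a synchronous standby.
# Assumption (the rule of thumb above): ~1 ms of added latency per
# 100 miles between primary and standby. Real fiber routes vary.

MS_PER_100_MILES = 1.0

def sync_commit_overhead_ms(distance_miles: float) -> float:
    """Approximate latency added to each commit by a sync standby."""
    return distance_miles / 100.0 * MS_PER_100_MILES

for miles in (50, 100, 150, 500, 1000):
    print(f"{miles:>5} miles -> ~{sync_commit_overhead_ms(miles):.1f} ms per commit")
```

At 1000 miles every commit pays roughly 10 ms of extra latency, which is why async (or fast sync) becomes attractive beyond the 100-150 mile range.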
1.9.2.7 Oracle Database Vault Mandatory Realms

Oracle Database Vault mandatory realms block both DBA privileges and direct object privilege grants, including those of the object owner. Traditional Oracle Database Vault realms protect against the common DBA ANY system privileges, preventing privileged users from accessing realm-protected objects using their SELECT ANY privilege. With a mandatory realm, users with direct object privileges, including the object owner, are blocked from accessing realm-protected objects as well. As with traditional realms, users who need access are authorized using the realm authorization capability of Oracle Database Vault.

Mandatory realms provide increased protection for sensitive application tables that exist within a larger application. Using this feature, application tables that contain highly sensitive information can be placed in a mandatory realm, and users with direct object grants will be blocked from accessing data contained in those tables. Mandatory realms can also be used in situations where database administrators, support analysts, or developers need temporary access to an application schema but access to specific application tables needs to be blocked.
Points to communicate:
Emphasize security and compliance benefit
Redacts specific columns in the database
Works for tables, views, and materialized views
Use Cases:
Existing Applications
Screens, reports, dashboards, panels …
Decision Support Systems
Data warehouse and BI
Exported spreadsheets
Target data:
Sensitive or regulated data
Data with structure
Stored in columns
Data needs to be redacted in almost all cases
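To make the column-level idea above concrete, a minimal conceptual sketch of what redaction does: masking a sensitive column while leaving the rest of the row untouched. The records, column name, and masking rule here are invented for illustration; this is not Oracle Data Redaction's API, which applies policies inside the database:

```python
# Conceptual illustration only: mask a sensitive column in query results,
# the way a redaction policy masks data as it leaves the database.
# The rows, column name, and masking rule are invented examples.

def redact_card(number: str) -> str:
    """Partial redaction: keep the last 4 digits, mask the rest."""
    return "*" * (len(number) - 4) + number[-4:]

rows = [
    {"name": "Alice", "card_number": "4111111111111111"},
    {"name": "Bob",   "card_number": "5500005555555559"},
]

# Apply the redaction rule to the sensitive column only.
redacted = [
    {**row, "card_number": redact_card(row["card_number"])}
    for row in rows
]

for row in redacted:
    print(row["name"], row["card_number"])  # e.g. Alice ************1111
```

The point for the audience: applications keep working unchanged because the shape of the result is preserved; only the sensitive values are masked.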
33
Row format is optimized for OLTP workloads.
OLTP operations tend to access only a few rows but touch all of the columns. A row format allows quick access to all of the columns in a record, since all the data for a given record is kept together in memory and on storage. Because all data for a row is kept together, much of the row can be brought into the CPU with a single memory reference. A row format is also much more efficient for row updates and inserts.
Analytical workloads access few columns but scan the entire data set, and they typically require some sort of aggregation. A columnar format allows much faster data retrieval when only a few columns in a table are selected, because all the data for a column is kept together in memory and a single memory access loads many column values into the CPU. It also lends itself to faster filtering and aggregation, making it the most optimized format for analytics.
Up until now you have been forced to pick just one format and suffer the tradeoff of either sub-optimal OLTP or sub-optimal analytics.
Other databases have row and column formats but you must choose ONE format for a given table.
Therefore you get either fast OLTP or fast Analytics on that table but not both.
Oracle’s unique dual format architecture allows data to be stored in both row and column format simultaneously. This eliminates the tradeoffs required by others.
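The row-versus-column tradeoff described above can be sketched in a few lines. The table, rows, and queries below are invented examples used only to illustrate the two access patterns:

```python
# Illustrative sketch of the row vs. column format tradeoff.
# The table contents and queries are invented examples.

# Row format: all columns of a record stored together -> fast row lookup.
row_store = [
    {"id": 1, "name": "widget", "qty": 10, "price": 2.50},
    {"id": 2, "name": "gadget", "qty": 4,  "price": 9.99},
    {"id": 3, "name": "gizmo",  "qty": 7,  "price": 5.25},
]

# Column format: each column stored together -> fast scans and aggregates.
column_store = {
    "id":    [1, 2, 3],
    "name":  ["widget", "gadget", "gizmo"],
    "qty":   [10, 4, 7],
    "price": [2.50, 9.99, 5.25],
}

# OLTP-style access: fetch one whole record (the row store reads one entry).
record = next(r for r in row_store if r["id"] == 2)

# Analytic-style access: aggregate one column over all rows
# (the column store reads a single contiguous list).
total_qty = sum(column_store["qty"])

print(record["name"], total_qty)  # gadget 21
```

The dual-format architecture keeps both representations of the same data in sync, so each query type gets the layout it is fastest on.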
Up until now, this could only be achieved by having a second copy of the table
(Data Mart, Reporting DB, Operational Data Store), which adds cost and complexity
to the environment, requires additional ETL processing and incurs time delays.
With Oracle’s unique approach, there is a single copy of the table on storage.
So there are no additional storage costs, synchronization issues, etc.
The Oracle optimizer is In-Memory aware. It has been optimized to automatically
route analytic queries to the column store, and OLTP queries to the row store.
Up until now, the only way to run analytic queries with an acceptable response on an OLTP environment was to create specific indexes for these queries.
The good thing about indexes is that they are extremely scalable. They work well in-memory and also are extremely efficient on-disk since they minimize disk IO needed to find the requested data.
All of these additional indexes need to be maintained as the data changes,
which increases the elapsed time for each of these changes.
The In-Memory Column store can now remove the need for additional analytic indexes if tables fit in memory.
Makes DML faster and reduces the overall storage space required for the system.
And unlike a pure in-memory database, if the system should crash and need to restart, your business can still operate fully.
OLTP queries and updates (the heart of any transaction-based system) will perform just as they always do against the indexed row store.
Analytical queries will execute slowly until the In-Memory Column Store is populated, but they will still run.
You don’t have to wait for all of the data to be populated in memory before resuming your business.
Removing the need for analytic indexes greatly simplifies tuning and reduces ongoing administration.
Queries on large tables no longer need to be indexed to perform well.