This document discusses managing space for databases, including:
- Using 4KB sector disks and specifying disk sector sizes when creating databases, data files, and redo log files.
- Transporting tablespaces and databases between platforms using RMAN and Data Pump utilities.
- The process involves making tablespaces read-only, converting data files to the target platform format, importing metadata, and making tablespaces read/write on the target system.
Migrating Oracle Databases to Exadata requires careful preparation to simplify and optimize databases for best performance and availability. The document discusses key points:
1. Preparation is essential to remove unnecessary objects and optimize databases before migrating.
2. Different migration methods like transportable tablespaces, data pump, or GoldenGate have advantages depending on environment and goals.
3. A fast network reduces migration time, but other bottlenecks like source system I/O or small transfers must also be addressed.
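The bandwidth point in item 3 can be made concrete with a back-of-the-envelope estimate. This is a sketch with made-up example figures (link speed, data volume, per-file overhead), not numbers from the document.

```python
# Back-of-the-envelope migration transfer estimate (illustrative only;
# the figures below are assumptions, not from the deck).

def transfer_hours(data_gb, net_gbit_s, per_file_overhead_s=0.0, files=1):
    """Estimate wall-clock hours to move data_gb over a net_gbit_s link,
    adding a fixed per-file overhead to model many small transfers."""
    seconds = (data_gb * 8) / net_gbit_s + per_file_overhead_s * files
    return seconds / 3600

# One 10 TB data set streamed over a 10 Gbit/s link:
bulk = transfer_hours(10_000, 10)
# The same volume sent as 1 million small files, 0.05 s overhead each:
chatty = transfer_hours(10_000, 10, per_file_overhead_s=0.05, files=1_000_000)
print(f"bulk: {bulk:.1f} h, small files: {chatty:.1f} h")
```

Even on a fast link, per-transfer overhead can dominate, which is why small transfers and source-side I/O limits must be addressed alongside network speed.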
Upgrade to IBM z/OS V2.4 technical actions - Marna Walle
Yes, "upgrade" is the new name for these traditional "migration" sessions! This is part one of a two-part session that will be of interest to System Programmers and their managers who are upgrading to z/OS 2.4 from either z/OS 2.2 or 2.3. It is strongly recommended that you review both sessions for a complete upgrade picture.
The general availability date for z/OS V2.4 was September 30, 2019.
AIX 6.1 introduces several new security features, including role-based access control (RBAC), which allows privileged tasks to be delegated to non-privileged users. It also includes an encrypted filesystem that encrypts data for protection and an updated security tool, AIX Security Expert, for centralized security management. The document discusses these features and others, such as the new Secure by Default installation option and the Systems Director Console.
Student guide power systems for aix - virtualization i implementing virtual... - solarisyougood
The document describes key concepts of logical partitioning for IBM Power Systems, including:
1) Partitions allocate system resources to create logically separate systems within the same physical footprint managed by the PowerVM Hypervisor.
2) Resources like processors, memory, and I/O can be dynamically allocated to partitions using DLPAR.
3) Advanced features allow shared processor pools, virtual I/O, live partition mobility, and capacity on demand.
4) The Hardware Management Console (HMC) manages partition configuration and resources.
IBM Storwize V7000 — a unique virtualizing disk array - Jaroslav Prodelal
IBM Storwize V7000 is a mid-range storage system that can virtualize external storage. It has dual controllers with 16GB of cache total. It supports SAS expansion enclosures with SSD, SAS, or NL-SAS drives in 2.5" or 3.5" formats. The system provides scalable storage with a maximum of 960TB capacity and supports RAID 0, 1, 5, 6 and 10. It offers high-speed connectivity through 8Gb FC, 1Gb iSCSI, and 10Gb Ethernet ports.
"Relax and Recover", an Open Source mksysb for Linux on Power - Sebastien Chabrolles
This deck was presented during IBM systems technical university in London (2016).
Have you ever dreamed of having an "mksysb-like" solution to quickly back up and restore your Linux on Power? If the answer is YES, the open source solution named Relax and Recover (ReaR) may be for you. Come to this session to learn how to implement this solution and about its capabilities, through presentation and live demonstration.
The document introduces IBM Power10 entry-level and mid-range servers, including models E1050, S1024, S1022, S1022s, S1014 and L1024, L1022. It discusses Power10 processors and unique features, new flexible consumption-based pricing models including CuOD and Pay as You Go (Pools 2.0). It provides an agenda for an introduction and deep dive on Power10 entry-level and mid-range servers, encouraging organizations to upgrade from P6, P7 and P8 systems to P10.
Updating Embedded Linux devices in the field requires robust, atomic, and fail-safe software update mechanisms to fix bugs remotely without rendering devices unusable. A commonly used open source updater is SWUpdate, a Linux application that can safely install updates downloaded over the network or from local media using techniques like separate recovery systems and ping-ponging between OS images. It aims to provide atomic system image updates with rollback capabilities and audit logs to ensure devices remain functional after updates.
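The A/B ("ping-pong") image scheme mentioned above can be sketched in a few lines. This is an illustration of the idea only; the class and its names are hypothetical and are not SWUpdate's API.

```python
# Minimal sketch of the A/B ("ping-pong") update scheme.
# Illustrative only; not SWUpdate's actual implementation.

class ABUpdater:
    def __init__(self):
        self.slots = {"A": "v1", "B": "v1"}   # two OS image slots
        self.active = "A"                      # slot the bootloader uses

    @property
    def standby(self):
        return "B" if self.active == "A" else "A"

    def install(self, image, healthy=True):
        """Write the new image to the standby slot, then switch only if
        the new system reports healthy; otherwise keep the old image."""
        target = self.standby
        # Flashing never touches the running (active) slot.
        self.slots[target] = image
        if healthy:
            self.active = target          # atomic switch on success
        return self.slots[self.active]    # image the device boots next

u = ABUpdater()
assert u.install("v2") == "v2"                     # good update: now on v2
assert u.install("v3-bad", healthy=False) == "v2"  # failed update rolls back
```

Because the running slot is never overwritten, a failed or interrupted install leaves the device bootable, which is the fail-safe property the summary describes.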
This is the planning part of a two-part session for system programmers and their managers who are planning to upgrade to z/OS V2.4. Part one focuses on preparing your current system for the upgrade and covers the system requirements you must meet. Part two covers the upgrade details specific to moving to z/OS V2.4 from either V2.2 or V2.3. It is strongly recommended that you attend both sessions for a complete upgrade picture for z/OS V2.4.
The general availability date for z/OS V2.4 was September 30, 2019.
This document discusses managing memory in Oracle Database. It describes the different components of memory including the SGA and PGA. It emphasizes using Automatic Memory Management (AMM) and Automatic Shared Memory Management (ASMM) to automatically configure memory, rather than manual configuration. It provides guidelines for monitoring and optimizing memory usage.
This document discusses how Oracle manages data concurrency using locks. It describes how Oracle applies row-level locks by default to allow for high concurrency. It also discusses how to identify and resolve locking conflicts, including dealing with deadlocks which Oracle automatically detects and resolves by terminating one of the transactions.
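The deadlock-detection idea can be illustrated with a tiny wait-for-graph cycle check. This is a hedged sketch of the general technique; Oracle's actual detector is internal and not described in the document.

```python
# A tiny wait-for-graph cycle check, sketching how a database engine can
# detect a deadlock (illustrative; not Oracle's implementation).

def has_deadlock(waits_for):
    """waits_for maps a transaction to the transaction it is blocked on.
    A cycle in this graph means a deadlock."""
    for start in waits_for:
        seen = set()
        txn = start
        # Follow the chain of blockers until it ends or repeats.
        while txn in waits_for:
            if txn in seen:
                return True               # revisited a node: cycle found
            seen.add(txn)
            txn = waits_for[txn]
    return False

# T1 waits on a row T2 has locked while T2 waits on a row T1 has locked:
# a classic deadlock, which the engine resolves by terminating one side.
assert has_deadlock({"T1": "T2", "T2": "T1"})
assert not has_deadlock({"T1": "T2"})     # simple blocking, no cycle
```

Row-level locking keeps these graphs small in practice, since transactions only block each other when they touch the same rows.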
This document discusses memory management techniques in Xen virtualization. It covers:
1) Xen uses a buddy allocator to hand out frames to guests and tracks memory usage and types with reference counts and a frametable.
2) For paravirtualized guests, Xen uses PV pagetables where the guest manages a PFN to MFN table and Xen provides a shared MFN to PFN table and checks guest pagetable contents.
3) For hardware-assisted guests, Xen supplies a second set of pagetables describing the PFN to MFN translation and access restrictions, which the CPU applies along with the guest's pagetables.
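The buddy-allocator idea in point 1 can be sketched as a toy model. This is for illustration only; Xen's real allocator also tracks reference counts and a frametable, as the summary notes.

```python
# Toy buddy allocator, sketching the frame-allocation idea
# (illustrative only; far simpler than Xen's real allocator).

class BuddyAllocator:
    def __init__(self, max_order):
        # free_lists[k] holds start frames of free blocks of 2**k frames.
        self.free_lists = {k: [] for k in range(max_order + 1)}
        self.free_lists[max_order].append(0)   # one big block initially

    def alloc(self, order):
        """Return the start frame of a free 2**order block, splitting a
        larger block into halves ("buddies") when needed."""
        for k in range(order, max(self.free_lists) + 1):
            if self.free_lists[k]:
                start = self.free_lists[k].pop()
                while k > order:               # split down to fit
                    k -= 1
                    self.free_lists[k].append(start + 2 ** k)  # free buddy
                return start
        raise MemoryError("no free block large enough")

pool = BuddyAllocator(max_order=4)     # 16 frames total
a = pool.alloc(2)                      # a 4-frame block
b = pool.alloc(2)                      # another, non-overlapping block
assert a != b
```

Power-of-two blocks make splitting (and, in a full implementation, coalescing freed buddies) cheap, which is why buddy allocation suits handing out frames to guests.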
DB2 for z/OS Real Storage Monitoring, Control and Planning - John Campbell
Just added another hot DB2 topic: DB2 for z/OS Real Storage Monitoring, Control and Planning. Check it out and make sure your system runs safely.
Engage 2018: IBM Notes and Domino Performance Boost - Reloaded - Christoph Adler
Created by Christoph Adler (panagenda) & Luis Guirigay (IBM)
There is always room for improvement! Maximizing IBM Notes client and Domino server performance doesn't have to be complicated. Reloaded for the latest IBM Notes/Domino 9 version (9.0.1 Feature Pack 10 or later), join Chris and Luis to find out the best and latest performance tuning tips. Learn how to debug your client(s) and server(s), and how to deal with outdated ODS, network latency, application/mail performance issues and more. Improve your IBM Notes client installations to provide a better experience for happier administrators and happier end users! As a special bonus, Chris will show you how to reduce the startup time of virtualized IBM Notes clients (Citrix / VMware / etc.).
Db2 for z/OS and FlashCopy - Practical use cases (June 2019 Edition) - Florence Dubois
This document discusses several practical use cases for Db2 for z/OS and FlashCopy technology:
1) Running CHECK utilities non-disruptively using dataset-level FlashCopy to create shadow copies of data.
2) Improving object-level recovery times by exploiting FlashCopy.
3) Enabling consistent image copy backups without application outages using FlashCopy image copies (FCIC).
4) Allowing non-disruptive UNLOAD operations with Db2 HPU using dataset-level FlashCopy.
5) Creating system-level backups for recovery or cloning purposes using volume-level FlashCopy.
z16 zOS Support - March 2023 - SHARE in Atlanta.pdf - Marna Walle
The document provides information about installing and configuring z/OS for the IBM z16 server. Key points include:
- z/OS V2.3 or higher is required for base support of the z16, while higher releases provide more capabilities. PTFs are categorized for required, exploitation, and recommended functions.
- SMP/E's REPORT MISSINGFIX command can identify missing z16 PTFs using fix categories rather than manually checking the PSP bucket.
- General upgrade best practices include having the latest z/OS service installed before the hardware, keeping changes limited in scope, and reviewing restrictions.
- The z/OSMF upgrade workflow provides an interactive guide through the upgrade steps.
The document provides requirements and sample exam questions for the Red Hat Certified Engineer (RHCE) EX294 exam. It outlines 18 exam questions to test Ansible skills. Key requirements include setting up 5 virtual machines, one as the Ansible control node and 4 managed nodes. The questions cover tasks like Ansible installation, ad-hoc commands, playbooks, roles, vaults and more. Detailed solutions are provided for each question/task.
This document discusses the differences between an in-place upgrade and a side-by-side upgrade when migrating from Oracle SOA 11g to 12c. An in-place upgrade installs 12c in a new Oracle home but upgrades the domain and database in the current location. A side-by-side upgrade installs 12c in a new home and creates a new domain and database. The document outlines advantages of in-place, such as no new configuration being required and long-running instances continuing seamlessly, but also disadvantages like potential downtime and limitations on the starting 11g version. It recommends considering a side-by-side approach when needing to take advantage of new 12c features or to avoid the risk of issues during an in-place upgrade.
This document discusses managing undo data in Oracle databases. It defines undo data as a copy of original data captured for every transaction that changes data. Undo data is stored in undo segments located in an undo tablespace and is used to support rollback operations, read-consistent queries, and Flashback features. It describes how to configure and guarantee undo retention, monitor undo data usage, and use the Undo Advisor to calculate optimal undo tablespace sizing.
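The core undo mechanism described above (capture the old value before a change so the change can be rolled back) can be sketched like this. The Table class and its methods are hypothetical, for illustration only; this is not how Oracle physically stores undo segments.

```python
# Sketch of the undo idea: before changing a row, save the old value so
# the change can be rolled back (illustrative only; not Oracle's
# undo-segment format).

class Table:
    def __init__(self, rows):
        self.rows = dict(rows)
        self.undo = []                     # (row_id, old_value) records

    def update(self, row_id, new_value):
        # Capture the original value before overwriting it.
        self.undo.append((row_id, self.rows[row_id]))
        self.rows[row_id] = new_value

    def rollback(self):
        """Reapply undo records newest-first to restore the old data."""
        while self.undo:
            row_id, old = self.undo.pop()
            self.rows[row_id] = old

t = Table({1: "Alice"})
t.update(1, "Alicia")
assert t.rows[1] == "Alicia"
t.rollback()                               # undo restores the original row
assert t.rows[1] == "Alice"
```

The same saved old values are what make read-consistent queries and Flashback features possible: a reader can reconstruct how a row looked before an uncommitted or later change.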
JCL (Job Control Language) is used on IBM mainframes to instruct the operating system how to run batch jobs and start subsystems. It acts as an interface between application programming and the MVS Operating System. JCL is used for compiling and executing batch programs, controlling jobs, allocating files, sorting files, and more. JCL uses statements like JOB, EXEC, and DD to identify the job, specify execution parameters, and define file allocations respectively.
CICS is the powerhouse of the mainframe. It has all the capabilities to handle online transactions. The presentation covers highly useful CICS concepts to refresh your CICS knowledge quickly, including a nice walkthrough of how to view abends in CICS.
This is the planning part of a two-part session for system programmers and their managers who are planning to upgrade to z/OS V2.5. Part one focuses on preparing your current system for the upgrade and covers the system requirements you must meet. Part two covers the upgrade details specific to moving to z/OS V2.5 from either V2.3 or V2.4. It is strongly recommended that you attend both sessions for a complete upgrade picture for z/OS V2.5.
The general availability date for z/OS V2.5 was September 30, 2021.
This session is aimed at the regular ISPF user who wants to learn about recent features of ISPF that can make life easier, and also at those who want to learn about the new features for ISPF in z/OS V2R2.
This document discusses using Oracle tools to manage database performance through SQL tuning. It covers using the SQL Tuning Advisor to identify and tune SQL statements that use the most resources. It also discusses using the SQL Access Advisor to tune a workload and the SQL Performance Analyzer to compare SQL performance before and after changes. The objectives are to learn to use these tools to optimize SQL performance and tune applications and workloads.
This document discusses Oracle's flashback technologies including Total Recall and the recycle bin. Total Recall allows tracking of historical database changes at the table level and querying past data. The recycle bin stores dropped objects and allows restoring them. The document covers setting up Total Recall, accessing historical data, managing space usage in the recycle bin, and querying the recycle bin.
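The recycle-bin behavior can be sketched as a move-aside rather than a delete. The class and method names below are hypothetical illustrations, not Oracle syntax.

```python
# Sketch of the recycle-bin idea: DROP moves an object aside instead of
# deleting it, so it can be restored later (illustrative only).

class Schema:
    def __init__(self):
        self.tables = {}
        self.recycle_bin = {}

    def drop(self, name):
        # Keep the dropped table's data instead of destroying it.
        self.recycle_bin[name] = self.tables.pop(name)

    def flashback_drop(self, name):
        """Restore a dropped table from the recycle bin."""
        self.tables[name] = self.recycle_bin.pop(name)

    def purge(self, name):
        # Only now is the space truly reclaimed.
        del self.recycle_bin[name]

s = Schema()
s.tables["emp"] = ["row1", "row2"]
s.drop("emp")
assert "emp" not in s.tables and "emp" in s.recycle_bin
s.flashback_drop("emp")
assert s.tables["emp"] == ["row1", "row2"]
```

This is also why managing recycle-bin space matters, as the document discusses: dropped objects keep consuming storage until they are purged or aged out.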
This document discusses database restore and recovery tasks. It describes causes of file loss like user errors, application errors, and media failures. It also discusses different recovery operations like restoring from backups, recovering redo logs, and recovering the control file. Critical vs non-critical file losses are defined. Automatic recovery of temporary files is also covered.
Tablespace point-in-time recovery (TSPITR) allows recovery of one or more tablespaces to an earlier point in time without affecting other tablespaces. It performs restore and recovery of data files for the recovery set and auxiliary set to the target time, then exports and imports metadata to make the recovered tablespaces available. TSPITR is useful for undoing DML changes or recovering from logical corruption in a subset of the database, and can be fully automated using RMAN or performed with a custom auxiliary instance.
This document provides a complete reference for the Server Control Utility (SRVCTL) in Oracle Database. It includes topics on using SRVCTL to manage configuration information for databases, instances, listeners, and other clusterware resources. The document outlines the SRVCTL command syntax and privileges required to perform administrative tasks. It also lists deprecated SRVCTL commands and options in Oracle Database 11g Release 2.
This document provides a summary of vi editor commands organized into sections on general startup, counts, cursor movement, screen movement, inserting, deleting, copying code, find commands, miscellaneous commands, line editor mode, ex commands, substitutions, reading files, write file, moving, and shell escape. It explains how to start and exit vi, move the cursor, search for text, edit text, and use ex commands.
This document provides an overview of how to create backups with RMAN (Recovery Manager) in Oracle. It discusses creating image file backups, whole database backups, full database backups, enabling fast incremental backups, duplex backup sets, backing up backup sets, multisection backups, archival backups, and reporting on and maintaining backups. The objectives are to learn how to perform various backup tasks with RMAN and manage those backups.
This document discusses how Oracle databases automatically manage space and techniques for optimizing space usage. It covers deferred segment creation, compression, monitoring tablespace usage, using the segment advisor to identify space savings opportunities, and shrinking segments to reclaim space. Resumable space allocation is also described to allow DML statements to resume if suspended due to space issues.
This document discusses diagnosing database issues and corruption. It covers the Data Recovery Advisor, which can detect, analyze, and repair failures. It also covers handling block corruption, setting up the Automatic Diagnostic Repository (ADR) to store diagnostic data, and using the Health Monitor to perform proactive database checks. Key topics include listing and advising on failures using RMAN, performing block media recovery, viewing ADR data with ADRCI, and running manual and automatic Health Monitor checks.
This document discusses user-managed database backup and recovery, including:
- The difference between user-managed backup, which uses OS commands, and server-managed backup, which uses RMAN.
- How to perform a complete database recovery by restoring files and archive logs and applying redo logs.
- How to perform incomplete recovery to recover to a past time or SCN by restoring files and applying redo logs until a specified point.
This document discusses using Oracle's Recovery Manager (RMAN) to perform various database recovery tasks, including recovering from the loss of data files, using incremental backups to reduce recovery time, switching to image copies for fast recovery, restoring a database to a new host, and performing disaster recovery. It provides examples of using RMAN commands like RESTORE, RECOVER, SWITCH, and SET NEWNAME to restore and recover database files from backups.
This document discusses configuring a database for recoverability. It covers placing a database in ARCHIVELOG mode, configuring multiple archive log destinations, configuring the Fast Recovery Area (FRA), and specifying retention policies. The key benefits of using the FRA are that it simplifies backup management and automatically manages disk space for recovery files.
Duplicating a database creates an identical copy of a database that can be used for testing or recovery purposes. There are multiple techniques for duplicating a database using RMAN, including duplicating from an active database, from RMAN backups, with or without connections to the target instance, recovery catalog, or using backups alone. The key steps are preparing the auxiliary instance, ensuring backups and redo logs are available, allocating auxiliary channels, and using the RMAN DUPLICATE command to restore files and recover the database.
Database Storage
The database consists of both physical structures and logical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting access to logical storage structures.
Disks, the primary storage medium for databases, currently have a predominant sector size of 512 bytes, but larger 4-KB sector disks are beginning to appear on the market, offering higher storage capacity with lower overhead. Oracle databases access the hard disk via a platform-specific device driver. (The database writer and log writer [and ASM processes] can write directly to disk without going through the OS.)
Oracle Database 11g Release 2 detects the disk sector size and uses high-capacity disks without performance degradation (because of internal optimizations that reduce, for example, the potential waste of redo space that you might expect with applications, such as an email system, that have many short transactions).
Supporting 4-KB Sector Disks
4-KB sector disks have physical sectors (shown in gray) and logical sectors (shown in blue). There are two types of 4-KB sector disks: emulation mode and native mode.
4-KB sector disks in emulation mode have eight logical sectors per one physical sector (as shown in the slide). They maintain a 512-byte interface to their 4-KB physical sectors—that is, the logical block address (LBA) references 512 bytes on disk.
Performance can decrease in emulation mode because the disk drive reads the entire 4-KB sector into its cache memory, modifies the addressed 512-byte portion, and then writes the whole 4-KB sector back to disk.
4-KB sector disks in native mode have one logical sector per physical sector (as shown in the slide). So, there is only the 4-KB interface. That is, the LBA references 4,096 bytes on disk.
Using 4-KB Sector Disks
In Oracle Database 11g Release 2, 4-KB sector disks mainly affect the redo log files. This includes online redo logs, standby redo logs, and archive logs. Oracle recommends that you create 4-KB block size logs on 4-KB emulation mode disks. On 4-KB native mode disks, you must create 4-KB block size logs. That is, the redo block size must match the physical disk sector size (for 512-byte and for 4-KB native mode disks); otherwise, you receive the ORA-1378 error. For 4-KB emulation mode disks, the redo block size can be 512 or 4,096 bytes; 4 KB is the preferred block size. When you create 512-byte blocks on a 4-KB emulation disk, a warning is printed to the alert log to indicate that the mismatched block size leads to degraded performance. This also applies to ASM disk groups.
The 4-KB sector disks also affect Oracle data files. The Oracle database allows you to create 2-KB block size data files on 512-byte sector disks. With 4-KB sector disks, Oracle recommends that you create 4-KB (or larger) block size data files on 4-KB emulation mode disks. On 4-KB native mode disks, you must create data files with a 4-KB (or larger) block size. The control file block size is already 16 KB; therefore, the 4-KB sector disks do not affect the control file.
Specifying the Disk Sector Size
In an Automatic Storage Management (ASM) environment, you can set the SECTOR_SIZE attribute for disk groups. This attribute can be set only at disk group creation time (by using the CREATE DISKGROUP command).
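As a sketch of the CREATE DISKGROUP syntax (the disk group name and disk paths are hypothetical), a disk group matching 4-KB native mode disks could be created like this; note that the SECTOR_SIZE attribute also requires the disk group compatibility attributes to be 11.2 or higher:

```sql
-- Create an ASM disk group whose sector size matches 4-KB sector disks.
-- Disk group name and disk paths are illustrative.
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2'
  ATTRIBUTE 'SECTOR_SIZE'      = '4096',
            'COMPATIBLE.ASM'   = '11.2',
            'COMPATIBLE.RDBMS' = '11.2';
```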
You can specify the size of the log file with the new BLOCKSIZE clause for the following commands:
ALTER DATABASE
CREATE DATABASE
CREATE CONTROLFILE
There is no additional work for you when you create a new database on 4-KB sector disks compared to creating a new database on 512-byte disks. There is no change in the GUI environments.
You have the option of using the BLOCKSIZE clause in the CREATE DATABASE command, as shown in the slide. When you do not specify a block size, the Oracle database discovers the underlying disk sector size and uses the disk sector size as the block size for the redo log creation. So by default, the redo log block size is the disk sector size, not the earlier 512-byte sector size.
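For example (the file name and size are illustrative), a redo log group with an explicit 4-KB block size can be added as follows, and the resulting block size verified in V$LOG:

```sql
-- Add a redo log group with a 4-KB block size on a 4-KB sector disk
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/app/oracle/oradata/orcl/redo04.log') SIZE 500M BLOCKSIZE 4096;

-- Verify the block size of each redo log group
SELECT GROUP#, BLOCKSIZE FROM V$LOG;
```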
Transporting Tablespaces
Transportable tablespaces are the fastest way to move large volumes of data between two Oracle databases. Using transportable tablespaces, Oracle data files (containing table data, indexes, and almost every other Oracle database object) can be directly transported from one database to another. Furthermore, like export and import, transportable tablespaces provide a mechanism for transporting metadata in addition to the data itself.
You can use the transportable tablespace feature to move data across platform boundaries. This simplifies the distribution of data from a data warehouse environment to data marts, which often run on smaller platforms. It also allows a database to be migrated from one platform to another by rebuilding the dictionary and transporting the user tablespaces.
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the data files containing all of the actual data are just copied to the destination location, and you use Data Pump to transfer only the metadata of the tablespace objects to the new database.
To be able to transport data files from one platform to another, you must ensure that both the source system and the target system are running on one of the supported platforms (see slide).
Note: The cross-platform transportable tablespace feature requires both platforms to be using the same character sets.
Concept: Minimum Compatibility Level
Both source and target databases need to advance their database COMPATIBLE initialization parameter to 10.0.0 or greater before they can use the cross-platform transportable tablespace feature.
When data files are first opened under Oracle Database 10g or 11g with COMPATIBLE set to 10.0.0 (or greater), the files are made platform-aware. This is represented by the check marks in the diagram. Each file identifies the platform that it belongs to. These files have identical on-disk formats for file header blocks that are used for file identification and verification. Read-only and offline files get the compatibility advanced only after they are made read/write or are brought online. This implies that tablespaces that are read-only in databases before Oracle Database 10g must be made read/write at least once before they can use the cross-platform transportable feature.
Minimum Compatibility Level
When you create a transportable tablespace set, Oracle Database computes the lowest compatibility level at which the target database must run. This is referred to as the compatibility level of the transportable set. Beginning with Oracle Database 11g, a tablespace can always be transported to a database with the same or higher compatibility setting, whether the target database is on the same or a different platform. The database signals an error if the compatibility level of the transportable set is higher than the compatibility level of the target database.
The above table shows the minimum compatibility requirements of the source and target tablespace in various scenarios. The source and target database need not have the same compatibility setting.
When data files are first opened, each file identifies the platform that it belongs to. These files have identical on-disk formats for file header blocks that are used for file identification and verification. Read-only and offline files get the compatibility advanced only after they are made read/write or are brought online.
Transportable Tablespace Procedure
To transport a tablespace from one platform to another (source to target), data files belonging to the tablespace set must be converted to a format that can be understood by the target or destination database. Although with Oracle Database, disk structures conform to a common format, it is possible for the source and target platforms to use different endian formats (byte ordering). When going to a different endian platform, you must use the CONVERT command of the RMAN utility to convert the byte ordering. This operation can be performed on either the source or the target platforms. For platforms that have the same endian format, no conversion is needed.
The slide graphic depicts one possible sequence of steps to transport tablespaces from a source platform to a target platform; alternatively, the conversion can be performed after shipping the files to the target platform. The last two steps must be executed on the target platform.
Basically, the procedure is the same as when using previous releases of the Oracle database server except when both platforms use different endian formats. It is assumed that both platforms are cross-transportable compliant.
Note: Byte ordering can affect the results when data is written and read. For example, the 2-byte integer value 1 is written as 0x0001 on a big-endian system (such as Sun SPARC Solaris) and as 0x0100 on a little-endian system (such as an Intel-compatible PC).
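As a minimal sketch of the procedure described above (the tablespace name sales_ts, the directory object, and the file paths are hypothetical), the source-side and target-side steps look like this:

```sql
-- 1. On the source: make the tablespace read-only
ALTER TABLESPACE sales_ts READ ONLY;

-- 2. At the OS prompt: export the tablespace metadata with Data Pump
--    expdp system DIRECTORY=dpump_dir DUMPFILE=sales_ts.dmp
--          TRANSPORT_TABLESPACES=sales_ts

-- 3. Copy the dump file and data files to the target, converting them
--    with RMAN CONVERT first if the endian formats differ, then:
--    impdp system DIRECTORY=dpump_dir DUMPFILE=sales_ts.dmp
--          TRANSPORT_DATAFILES='/u01/oradata/orcl/sales_ts01.dbf'

-- 4. On the source (and target, if desired): return to read/write
ALTER TABLESPACE sales_ts READ WRITE;
```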
Determining the Endian Format of a Platform
You can query V$TRANSPORTABLE_PLATFORM to determine whether the endian ordering is the same on both platforms. V$DATABASE has two columns that can be used to determine your own platform name and platform identifier. Run the query below for a comprehensive list of supported platforms and their endian formats:
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- -------------------------------- --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
7 Microsoft Windows IA (32-bit) Little
10 Linux IA (32-bit) Little
6 AIX-Based Systems (64-bit) Big
3 HP-UX (64-bit) Big
5 HP Tru64 UNIX Little
4 HP-UX IA (64-bit) Big
11 Linux IA (64-bit) Little
15 HP Open VMS Little
Determining the Endian Format of a Platform (continued)
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- -------------------------------- --------------
8 Microsoft Windows IA (64-bit) Little
9 IBM zSeries Based Linux Big
13 Linux 64-bit for AMD Little
16 Apple Mac OS Big
12 Microsoft Windows 64-bit for AMD Little
17 Solaris Operating System (x86) Little
18 IBM Power Based Linux Big
19 HP IA Open VMS Little
20 Solaris Operating System (AMD64) Little
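To determine the endian format of your own platform, the two V$DATABASE columns mentioned earlier can be joined to V$TRANSPORTABLE_PLATFORM:

```sql
SELECT d.PLATFORM_ID, d.PLATFORM_NAME, tp.ENDIAN_FORMAT
FROM   V$DATABASE d
       JOIN V$TRANSPORTABLE_PLATFORM tp
       ON   d.PLATFORM_ID = tp.PLATFORM_ID;
```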
Using the RMAN CONVERT Command
You use the RMAN CONVERT command to convert a tablespace, data file, or database to the format of a destination platform in preparation for transport across different platforms. Input files are not altered by CONVERT because the conversion is not performed in place. Instead, RMAN writes converted files to a specified output destination.
CONVERT TABLESPACE example:
Assume that you have an ORCL database on a Linux 32-bit platform, which you want to transport to a Solaris 64-bit platform.
Connect as TARGET to the source database (mounted or open).
The tablespace must be read-only at the time of conversion.
The result is a set of converted data files in the /tmp/transport_to_solaris/ directory, with data in the right endian-order for the Solaris 64-bit platform.
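A sketch of the RMAN command for this scenario (the tablespace name sales_ts is hypothetical; the platform name must match V$TRANSPORTABLE_PLATFORM exactly):

```sql
RMAN> CONVERT TABLESPACE sales_ts
        TO PLATFORM 'Solaris[tm] OE (64-bit)'
        FORMAT '/tmp/transport_to_solaris/%U';
```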
Restrictions: The CONVERT command does not process user data types that require endian conversions. To transport objects between databases that are built on underlying types that store data in a platform-specific format, use the Data Pump Import and Export utilities.
For detailed prerequisites, usage, restrictions, and syntax, see the Oracle Database Backup and Recovery Reference.
Transportable Tablespaces with Enterprise Manager
Enterprise Manager can be used to implement transportable tablespaces. From the Database home page, click the Data Movement folder tab, and then click Transport Tablespaces under the Move Database Files section. Select "Generate a transportable tablespace set" and provide the login credentials for the oracle user, and then click Continue. On the Select Tablespaces page, add the tablespaces you want to transport from the displayed list by clicking the Tablespace button, and then clicking Add. Near the bottom of the page, you must select the level of containment checking to be done before the tablespaces are processed. The containment check looks for object dependencies within the tablespaces. When you have finished, click Next. Wait a few moments while the containment check runs. Address any issues found by the check before continuing.
Transportable Tablespaces with Enterprise Manager (continued)
On the Destination Characteristics page, you must supply the destination platform and character sets. Under the Destination Database Platform section, select the operating system of the destination machine from the drop-down list. If the destination platform is different from the source platform, Enterprise Manager will perform a data conversion. Continue to the Destination Character Set section of the page and choose the destination character set and national character set from the drop-down lists. These character sets must be compatible with the source sets. When you click Next to continue, Enterprise Manager checks the compatibility of the character sets. If the chosen character sets are flagged as incompatible, you are returned to the Destination Characteristics page to correct your selections.
Transportable Tablespaces with Enterprise Manager (continued)
On the Schedule page, supply a meaningful description for the default job name. You can also choose to start the job immediately or schedule it for later execution. When you have made your selections, click Next to continue. On the review page, you can verify your choices before submitting the job for execution. Click the Submit Job button if the entries are correct. Click the Back button to correct any incorrect entries.
Transporting Databases
You can use the transportable tablespace feature to migrate a database to a different platform by creating a new database on the destination platform and performing a transport of all the user tablespaces. You cannot transport the SYSTEM tablespace. Therefore, objects such as sequences, PL/SQL packages, and other objects that depend on the SYSTEM tablespace are not transported. You must either create these objects manually on the destination database, or use Data Pump to transport the objects that are not moved by transportable tablespace.
To transport databases from one platform to another, you must ensure that both the source system and the target system are running on one of the platforms that are listed in V$TRANSPORTABLE_PLATFORM and that both have the same endian format. For example, you can transport a database running on Linux IA (32-bit) to one of the Windows platforms.
If one or both of the databases uses Automatic Storage Management (ASM), you may need to use the DBMS_FILE_TRANSFER package to transfer the files, because ASM files are not directly accessible to ordinary OS utilities such as FTP.
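For example (the directory object names and file name are hypothetical), a data file can be copied out of an ASM disk group with DBMS_FILE_TRANSFER:

```sql
-- SRC_DIR points into an ASM disk group; DEST_DIR points to an OS directory
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'sales_ts01.dbf',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'sales_ts01.dbf');
END;
/
```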
Unlike transportable tablespace, where there is a target database to plug data into, this feature creates a new database on the target platform. The newly created database contains the same data as the source database. Except for things such as database name, instance name, and location of files, the new database also has the same settings as the source database.
Note: Transporting a database is faster than using Data Pump to move the data.
Database Transportation Procedure: Source System Conversion
Before you can transport your database, you must open it in READ ONLY mode. Then use RMAN to convert the necessary data files of the database.
When you do the conversion on the source platform, the RMAN CONVERT DATABASE command generates a script containing the correct CREATE CONTROLFILE RESETLOGS command that is used on the target system to create the new database. The CONVERT DATABASE command then converts all identified data files so that they can be used on the target system. You then ship the converted data files and the generated script to the target platform. By executing the generated script on the target platform, you create a new copy of your database.
Note: The source database must be running with the COMPATIBLE initialization parameter set to 10.0.0 or higher. All identified tablespaces must have been READ WRITE at least once since the time that COMPATIBLE was set to 10.0.0 or higher.
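A sketch of the source-side conversion (the new database name, script path, and conversion paths are hypothetical; both platforms must share the same endian format):

```sql
RMAN> CONVERT DATABASE
        NEW DATABASE 'newdb'
        TRANSPORT SCRIPT '/tmp/transport_newdb.sql'
        TO PLATFORM 'Microsoft Windows IA (32-bit)'
        DB_FILE_NAME_CONVERT '/u01/oradata/orcl/' '/tmp/convert/';
```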
Database Transportation Procedure: Target System Conversion
Before you can transport your database, you must open it in READ ONLY mode. Then use RMAN to convert the necessary data files of the database.
When you do the conversion on the target platform, the CONVERT DATABASE command (which is executed on the source system) generates only two scripts used on the target system to convert the data files, and to re-create the control files for the new database. Then, you ship the identified data files and both scripts to the target platform. After this is done, execute both scripts in the right order. The first one uses the existing RMAN CONVERT DATAFILE command to do the conversion, and the second issues the CREATE CONTROLFILE RESETLOGS SQL command with the converted data files to create the new database.
Note: The source database must be running with the COMPATIBLE initialization parameter set to 10.0.0 or higher. All identified tablespaces must have been READ WRITE at least once since COMPATIBLE was set to 10.0.0 or higher.
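A sketch of the target-side variant, run against the source database with the ON TARGET PLATFORM clause (names and paths are hypothetical); it generates the convert script and the transport script without converting any files on the source:

```sql
RMAN> CONVERT DATABASE ON TARGET PLATFORM
        CONVERT SCRIPT '/tmp/convert_newdb.rman'
        TRANSPORT SCRIPT '/tmp/transport_newdb.sql'
        NEW DATABASE 'newdb'
        FORMAT '/tmp/convert/%U';
```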
Database Transportation: Considerations
Redo logs, control files, and tempfiles are not transported. They are re-created for the new database on the target platform. As a result, the new database on the target platform must be opened with the RESETLOGS option.
If a password file is used, it is not transported, and you need to create it on the target platform because the file names allowed for the password file are OS specific. However, the output of the CONVERT DATABASE command lists all the usernames and their system privileges, and advises you to re-create the password file and add entries for these users on the target platform.
The CONVERT DATABASE command lists all the directory objects and objects that use BFILE data types or external tables in the source database. You may need to update these objects with new directory and file names. If BFILEs are used in the database, you have to transport the BFILEs.
The generated PFILE and transport script use Oracle Managed Files (OMF) for database files. If you do not want to use OMF, you must modify the PFILE and transport script.
The transported database has the same DBID as the source database. You can use the DBNEWID utility to change the DBID. In the transport script as well as the output of the CONVERT DATABASE command, you are prompted to use the DBNEWID utility to change the database ID.
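At the OS prompt, changing the DBID with the DBNEWID utility looks like this sketch (the SYS credentials and the newdb net service name are illustrative; the database must be mounted, not open):

```shell
$ nid TARGET=SYS@newdb
```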