Vincent Van der Kussen discusses KVM and related virtualization tools. KVM is a kernel module that allows Linux to function as a hypervisor. It supports x86, PowerPC and s390 architectures. Key tools discussed include libvirt (the virtualization API), virsh (command line tool for libvirt), Qemu (runs virtual machines), and virt-tools like virt-install. The document provides an overview of using these tools to manage virtual machines and storage.
This document provides an overview of Ceph storage, including:
1) Ceph addresses challenges faced by traditional storage such as increasing data growth and legacy infrastructure limitations through a software-defined storage approach.
2) Ceph's architecture is based on RADOS which uses four daemons - monitors, object storage devices, managers, and metadata servers - to distribute and organize data across pools and placement groups.
3) Clients can access Ceph storage using the Ceph native API, Ceph block device, Ceph object gateway, or Ceph file system.
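The pool and placement-group layering described above can be pictured with a toy model. This is not Ceph's actual hashing or CRUSH code; the function names, the MD5 stand-in hash, and the parameters are illustrative assumptions only:

```python
import hashlib

def pg_for_object(obj_name: str, pg_num: int) -> int:
    """Map an object name to one of pg_num placement groups (toy stand-in for Ceph's hash)."""
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_num

def osds_for_pg(pg_id: int, osds: list, size: int = 3) -> list:
    """Rank OSDs by a per-PG hash and keep the top `size` as the replica set.
    CRUSH does something far more elaborate (weights, failure domains), but the
    key property is the same: any client can compute the mapping itself,
    without consulting a central lookup table."""
    score = lambda osd: int(hashlib.md5(f"{pg_id}:{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=score, reverse=True)[:size]

osds = [f"osd.{i}" for i in range(8)]
pg = pg_for_object("myimage.rbd", pg_num=128)   # object -> placement group
replicas = osds_for_pg(pg, osds)                # placement group -> OSDs
print(f"pg {pg} -> {replicas}")
```

Because both steps are pure functions of the object name and the cluster membership, every client deterministically computes the same placement.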
The document introduces IBM Power10 entry-level and mid-range servers, including models E1050, S1024, S1022, S1022s, S1014, L1024 and L1022. It discusses Power10 processors and their unique features, as well as new flexible consumption-based pricing models, including CUoD and Pay as You Go (Pools 2.0). It provides an agenda for an introduction and deep dive on Power10 entry-level and mid-range servers, encouraging organizations to upgrade from P6, P7 and P8 systems to P10.
Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.
Rook uses the power of the Kubernetes platform to deliver its services via a Kubernetes Operator for each storage provider.
Oleg Chunikhin, Co-Founder and CTO @ Kublr.com, will present an introduction to storage management on k8s using Rook and Ceph.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... (Odinot Stanislas)
After a short introduction to distributed storage and a description of Ceph, Jian Zhang runs several interesting benchmarks in this presentation: sequential tests, random tests and, above all, a comparison of results before and after optimization. The configuration parameters touched and the optimizations applied (large page numbers, omap data on a separate disk, ...) deliver at least a 2x performance gain.
Oracle Exadata is a packaged solution offering from Oracle, configured with bundled hardware, storage and database, which is touted to be optimized for handling scalable data warehouse-type workloads in query and analysis.
The document presents a brief history of Exadata and the main features of the new Exadata X8M architecture. Exadata has evolved since 2008 to deliver greater performance, capacity and functionality. Exadata X8M introduces more powerful processors, persistent memory, a faster internal network and higher storage capacity. The document also reviews what to consider when upgrading Exadata, such as incompatibility between versions and the available deployment options.
Oracle Database 19c builds upon key architectural, distributed data and performance innovations established in the earlier Oracle Database 12c and 18c releases. Oracle 19c has many new features; this presentation covers the areas below:
Automated Installation, Configuration and Patching
AutoUpgrade and Database Utilities
The internals and the latest trends of container runtimes (Akihiro Suda)
The document discusses the internals and latest trends of container runtimes. It describes how container runtimes like Docker use kernel features like namespaces and cgroups to isolate containers. It explains how containerd and runc work together to manage the lifecycles of container processes. It also covers security measures like capabilities, AppArmor, and SELinux that container runtimes employ to safeguard the host system.
This document discusses building Oracle event mapping files to extract checked events from specific Oracle functions in 10 minutes. It explains how function parameters are passed in x86-64 calling conventions and traces a C program's execution flow using Intel Pin tools to discover undocumented events. The document promotes event hunting in Oracle as it is essentially a huge C program and provides links to event name to ID mapping files and kernel function to event mapping files to facilitate tracing Oracle events.
We will examine most of the features that this “Swiss Army knife” software provides. It is an in-memory fabric that fits between the database and the application layer. Apache Ignite is powered by the H2 engine, which its developers have used to create an in-memory, distributed, ACID, fully ANSI-99 compliant, highly available (HA) and scalable database. They use a non-consensus clustering algorithm based on rendezvous hashing (https://en.wikipedia.org/wiki/Rendezvous_hashing) to be even more scalable than other NoSQL solutions. This tool respects the relational data model that we have used for so many years and eliminates traditional problems like “expensive joins”, since it uses RAM as the primary storage medium. We will see what this tool can do in action through hands-on examples.
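The rendezvous hashing mentioned above is simple enough to sketch in a few lines. This is an illustrative toy, not Ignite's implementation; the node names and the SHA-256 scoring are assumptions:

```python
import hashlib

def rendezvous_node(key: str, nodes: list) -> str:
    """Highest-random-weight (rendezvous) hashing: every client scores each
    node against the key and picks the winner, so all clients agree on the
    owner without running any consensus round."""
    score = lambda node: int(hashlib.sha256(f"{node}:{key}".encode()).hexdigest(), 16)
    return max(nodes, key=score)

nodes = ["node-a", "node-b", "node-c", "node-d"]
owner = rendezvous_node("customer:42", nodes)

# When a node leaves, only the keys it owned get reassigned; every other
# key keeps its owner. That minimal reshuffling is what makes the scheme scale.
moved = sum(
    1 for i in range(1000)
    if rendezvous_node(f"k{i}", nodes) != "node-d"
    and rendezvous_node(f"k{i}", nodes[:-1]) != rendezvous_node(f"k{i}", nodes)
)
print(owner, moved)
```

The `moved` count is zero by construction: the winner over a subset that still contains the old winner is the old winner.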
NoSQL and NewSQL: Tradeoffs between Scalable Performance & Consistency (ScyllaDB)
This webinar compares NoSQL and NewSQL databases. We will look at the significant architectural differences between the two, tradeoffs between availability, scalable performance and consistency, data models, and share benchmark results to display the performance implications of NoSQL versus NewSQL.
OpenStack Best Practices and Considerations - terasky tech day (Arthur Berezin)
Arthur Berezin presented on best practices for deploying enterprise-grade OpenStack implementations. The presentation covered OpenStack architecture, layout considerations including high availability, and best practices for compute, storage, and networking deployments. It provided guidance on choosing backend drivers, overcommitting resources, and networking designs.
Get answers to real-time Oracle GoldenGate interview questions!
Here is the link for full article: https://www.support.dbagenesis.com/post/oracle-golden-gate-interview-questions
Virtualization Uses - Server Consolidation (Rubal Sagwal)
Server Consolidation.
Why do we need Server Consolidation and what are the outcomes?
Benefits of Server consolidation
How to do server consolidation?
Server product architecture:
1. Virtual Machine
2. Guest OS
3. Host OS
What are the server consolidation considerations?
Types of server consolidation.
Benefits of VMware over Server Consolidation.
VMware infrastructure.
Disaster recovery and backup plan.
Kubernetes Story - Day 2: Quay.io Container Registry for Publishing, Building... (Mihai Criveti)
Friday Brunch - a Kubernetes Story - Day 2: Build containers with Buildah, Skopeo and Quay.io https://www.youtube.com/watch?v=ygJrzMIZiWQ
In this workshop you'll learn how to build and manage containers, publish images to Quay, then install and deploy containers onto OpenShift.
Experience new tools to build, manage and deploy containerized applications following best practices. Learn how to build containers locally with podman, skopeo and buildah, publish and scan containers for vulnerabilities - and deploy containerized applications locally or on cloud using Kubernetes and OpenShift!
Mihai will take you through the process of:
Day 1 = Build: Building and running container images locally with podman, skopeo and buildah. Building containers for years or just getting started? Check out these new tools that help you build and run containers locally, and how they can help you get started with Kubernetes and OpenShift.
Learn some of the best practices on how you can build containers that run as regular users and how to automate the container build process using buildah. Learn about the Universal Base Image and how you can start your image builds from a known, trusted source.
Then, over the next two Fridays, the story will evolve as follows...
Day 2 = Publish: Publishing container images to quay.io and scanning containers for vulnerabilities and container best practices
Day 3 = Deploy: Getting started with OpenShift using CodeReady Containers or OKD and deploying containers on a Kubernetes Platform (Red Hat OpenShift / OKD / CRC)
This webinar gives a brief introduction to the OpenStack cloud, covering the topics:
- the OpenStack cloud platform,
- the Open Source community,
- OpenStack architecture and its main elements,
- overview of the compute, networking, block-storage and object-storage services.
If you want to know more about OpenStack, visit our website http://www.create-net.org/community/openstack-training.
The document discusses Oracle Data Guard, a disaster recovery solution for Oracle databases. It provides:
1) An overview of Data Guard, explaining that it maintains a physical or logical standby copy of the primary database to enable failover in the event of outages or disasters.
2) Details on the different types of standby databases - physical, logical, and snapshot - and how they are maintained through redo application or SQL application.
3) The various Data Guard configuration options like real-time apply, time delay, and role transitions such as switchover and failover.
This document discusses Oracle Multitenant 19c and pluggable databases. It begins with an introduction to the speaker and overview of pluggable databases. It then describes the traditional Oracle database architecture and the multitenant architecture in Oracle 19c. It discusses the different components of a container database including the root, seed PDB, and application containers. It also covers how to create pluggable databases from scratch, through cloning locally and remotely, relocating PDBs, and plugging in unplugged PDBs.
Automatic Storage Management allows Oracle databases to use disk storage that is managed as an integrated cluster file system. It provides functions like striping, mirroring, and rebalancing of data across storage disks. The document outlines new features in Oracle Exadata and Automatic Storage Management including Flex ASM, which eliminates the requirement for an ASM instance on every server, and Flex Disk Groups, which provide file groups and enable quota management and redundancy changes for databases. It also discusses enhancements to disk offline and online operations and rebalancing.
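The striping ASM performs across disks can be pictured with a toy round-robin model. The allocation-unit size, function names and 3-disk layout below are illustrative assumptions, not ASM internals:

```python
def stripe(data: bytes, disks: int, au: int = 4) -> list:
    """Deal the data out round-robin across `disks` in allocation units of
    `au` bytes, so sequential I/O is spread over every spindle."""
    lanes = [bytearray() for _ in range(disks)]
    for i in range(0, len(data), au):
        lanes[(i // au) % disks].extend(data[i:i + au])
    return [bytes(lane) for lane in lanes]

def unstripe(lanes: list, total_len: int, au: int = 4) -> bytes:
    """Reassemble the original extent by reading allocation units back
    in the same round-robin order."""
    out = bytearray()
    cursors = [0] * len(lanes)
    i = 0
    while len(out) < total_len:
        d = i % len(lanes)
        out.extend(lanes[d][cursors[d]:cursors[d] + au])
        cursors[d] += au
        i += 1
    return bytes(out)

data = b"0123456789ABCDEFGHIJ"
lanes = stripe(data, disks=3)       # each lane plays the role of one disk
restored = unstripe(lanes, len(data))
print(lanes, restored == data)
```

Mirroring would simply keep each allocation unit on two or more lanes; rebalancing re-deals the units when a disk is added or dropped.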
"Maximum Availability Architecture (MAA) for Oracle Database, Exadata and the Cloud" was first presented during Oracle Open World (OOW) 2019. This version of the deck has been updated for OOW London 2020 including the latest information regarding patching and upgrading the Oracle Database with Zero Downtime.
CRUSH is the powerful, highly configurable algorithm Red Hat Ceph Storage uses to determine how data is stored across the many servers in a cluster. A healthy Red Hat Ceph Storage deployment depends on a properly configured CRUSH map. In this session, we will review the Red Hat Ceph Storage architecture and explain the purpose of CRUSH. Using example CRUSH maps, we will show you what works and what does not, and explain why.
Presented at Red Hat Summit 2016-06-29.
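The weighted, deterministic placement that CRUSH performs can be approximated with a small straw2-style sketch. This is a toy under stated assumptions; the hash choice, the weight handling and all names are illustrative, not Ceph's CRUSH:

```python
import hashlib
import math

def crush_like_choose(pg_id: int, osds: dict, n: int = 3) -> list:
    """Straw2-style weighted draw: each OSD draws ln(u)/weight from a
    deterministic per-(pg, osd) hash, and the largest draws win. Heavier
    OSDs win proportionally more placement groups. Toy sketch only."""
    def straw(osd: str) -> float:
        h = int(hashlib.sha256(f"{pg_id}:{osd}".encode()).hexdigest(), 16)
        u = (h % 10**8 + 1) / (10**8 + 1)   # deterministic uniform in (0, 1)
        return math.log(u) / osds[osd]      # straw2 draw; take the maxima
    return sorted(osds, key=straw, reverse=True)[:n]

osds = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 1.0, "osd.3": 4.0}
acting = crush_like_choose(42, osds)
# With weight 4 out of a total of 7, osd.3 should end up primary for
# roughly 4/7 of placement groups.
primaries = [crush_like_choose(pg, osds)[0] for pg in range(2000)]
heavy_share = primaries.count("osd.3") / len(primaries)
print(acting, round(heavy_share, 2))
```

A misweighted CRUSH map skews exactly this proportion, which is why a healthy cluster depends on the map being configured correctly.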
Storage 101: Rook and Ceph - Open Infrastructure Denver 2019 (Sean Cohen)
Starting from the basics, we explore the advantages of using Rook as a Storage operator to serve Ceph storage, the leading Software-Defined Storage platform in the Open Source world. Ceph automates the internal storage management, while Rook automates the user-facing operations and effectively turns a storage technology into a service transparent to the user. The combination delivers an impressive improvement in UX and provides the ideal storage platform for Kubernetes.
A comprehensive examination of use cases and open problems will complement our review of the Rook architecture. We will deep-dive into what Rook does well, what it does not do (yet), and what trade-offs using a storage operator involves operationally. With live access to a running cluster, we will showcase Rook in action as we discuss its capabilities.
https://www.openstack.org/summit/denver-2019/summit-schedule/events/23515/storage-101-rook-and-ceph
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture (Danielle Womboldt)
This document discusses an all-flash Ceph array design from QCT based on NUMA architecture. It provides an agenda that covers all-flash Ceph and use cases, QCT's all-flash Ceph solution for IOPS, an overview of QCT's lab environment and detailed architecture, and the importance of NUMA. It also includes sections on why all-flash storage is used, different all-flash Ceph use cases, QCT's IOPS-optimized all-flash Ceph solution, benefits of using NVMe storage, QCT's lab test environment, Ceph tuning recommendations, and benefits of using multi-partitioned NVMe SSDs for Ceph OSDs.
Mahindra Group has a history of global collaboration and pioneering globalization. Its purpose is to challenge conventional thinking and enable stakeholders to rise. Tech Mahindra is a leader in engineering for next-gen connected solutions and the internet of things. It provides connected engineering and analytics services across industries like automotive, healthcare, smart cities, and renewable energy to manage the changing world.
This document provides step-by-step instructions for upgrading an Oracle database from version 10.2.0.4 to 11.2.0.2. It involves running pre-upgrade checks, backing up the database, setting environment variables to point to the new Oracle home, running upgrade scripts to upgrade the database, and performing post-upgrade tasks like recompiling objects and checking for errors. The process ensures the integrity and consistency of the upgraded Oracle software.
Oracle 10g to 11g upgrade on SAP (10.2.0.5.0 to 11.2.0.3) (yoonus ch)
This document outlines the steps to upgrade an Oracle database from version 10g to 11g on a SAP system. The key steps are:
1. Install Oracle 11g software in a new ORACLE_HOME directory with sufficient disk space.
2. Copy configuration files like listener.ora, sqlnet.ora, and tnsnames.ora from the old Oracle home.
3. Back up the Oracle 10g database and archive logs.
4. Run the Oracle Database Upgrade Assistant (DBUA) to perform the database upgrade while the database is running in the old Oracle home.
5. Perform additional post-upgrade steps like installing required patches if the DBUA was started
The document provides information on upgrading Oracle E-Business Suite Release 11i to Release 12, including planning, preparing, performing the upgrade, and post-upgrade tasks. Key steps include applying the latest 11i patches, running the TUMS utility, upgrading the database to at least Oracle 10g Release 2, laying down the new Release 12 technology stack, and running the upgrade driver to migrate the applications to Release 12. The document outlines important tasks for each phase of the upgrade process.
Steps for upgrading the database to 10g Release 2 (nesmaddy)
The document provides steps for upgrading an Oracle database to version 10g Release 2. It details:
1) Running scripts that check the current database configuration and requirements for upgrade.
2) Making any necessary adjustments to parameters, tablespaces, redo logs.
3) Creating scripts to recreate database links if needing to downgrade.
4) Addressing issues with data types like timestamps with timezones and national character sets.
This document provides an overview of managing the Oracle database instance. It covers starting and stopping the Oracle database and components using Oracle Enterprise Manager and SQL*Plus. It describes accessing databases with SQL*Plus and modifying initialization parameters. It also discusses the stages of database startup, shutdown options, viewing the alert log, and accessing dynamic performance views.
Oracle Applications 11i hot backup cloning with Rapid Clone (Deepti Singh)
This document provides instructions for cloning an Oracle Applications 11i environment from a production system called PRODSERVER to a test system called TESTSERVER using Rapid Clone hot backup methodology. It involves 7 stages: 1) preparing the source system, 2) putting the database in backup mode and copying files, 3) copying application files, 4) copying files to the target, 5) configuring the target database, 6) configuring the target application tier, and 7) finishing tasks like updating profiles. Key steps include applying required patches, running preclone scripts, copying database and application files, recovering the database using the backup control file, and configuring the cloned application and database tiers.
This document outlines steps to refresh a development database from a production database. It describes copying backup files including data files, redo logs, and archive logs from the production environment to the development environment. It then details replacing the development control file with the production control file, recovering the development database using the backup files, and opening the development database with a resetlogs option to synchronize it with the current state of the production database. The goal is to ensure the development database accurately reflects the current state of the production database for testing purposes.
Oracle 11g Installation With ASM and Data Guard Setup - Arun Sharma
In this article we will look at Oracle 11g installation with ASM storage and also setup physical standby on ASM.
We will be following below steps for our configuration:
Setup Primary Server
Setup Standby Server
Full article link is here: https://www.support.dbagenesis.com/post/oracle-11g-installation-with-asm-and-data-guard-setup
Oracle 11g to 12c Upgrade With Data Guard and ASM - Arun Sharma
In this article we will be performing Oracle 11g to 12c database upgrade with data guard and ASM configured.
Below are the steps we are going to follow to perform the database upgrade:
Upgrade GRID_HOME on standby
Upgrade ORACLE_HOME on standby
Upgrade GRID_HOME on primary
Upgrade ORACLE_HOME on primary
Post upgrade steps
Let us start the upgrade process.
Full article link is here: https://www.support.dbagenesis.com/post/oracle-11g-to-12c-upgrade-with-data-guard-asm
Upgrade database using cloud_control provisioning - Monowar Mukul
This document summarizes using Oracle Cloud Control 12c to perform a multiple database upgrade from Oracle 11.2.0.4 to 12.1.0.2. The database provisioning features of OEM12c allow upgrading multiple databases simultaneously in parallel. The process selects the databases to upgrade, configures new listeners, inserts a breakpoint to review and troubleshoot, then resumes and monitors the upgrade process. Validation checks the upgrade results to confirm both databases were upgraded to 12.1.0.2 successfully.
Quickly learn how to drive patchVantage and understand its benefits using the presentation in conjunction with the AWS Cloud Instance. This is a real-time Oracle Database Administration session.
This document provides instructions for cloning Oracle Applications Release 12 using Rapid Clone techniques. It describes completing pre-clone steps, then using the adcfgclone.pl script to clone the database tier from the source to target environment. Next, it discusses copying over application files and cloning the applications tier. The process involves running adcfgclone.pl for the database tier and applications tier, entering prompts, and monitoring logs to complete the clone.
The document provides steps for cloning Oracle E-Business Suite Release 12 using Rapid Clone techniques. It describes preparing the source environment, copying database and application tiers to the target, running cloning scripts to clone the database tier using adcfgclone.pl and then cloning the application tier. It notes potential issues like port conflicts and provides troubleshooting steps to address errors in the cloning logs. The overall process involves preparing the environments, copying files, running cloning scripts, verifying services start correctly and testing functionality in the new cloned environment.
This document provides instructions for setting up a physical standby database for an Oracle E-Business Suite Release 12.2 database using Oracle 11gR2. It describes configuring the primary database for archiving and adding standby redo logs. It also covers copying the Oracle home to the standby server, modifying initialization parameters, and using RMAN to duplicate the primary database and recover it as a physical standby. Key steps include enabling archive logging on the primary, setting the log archive destination, and starting redo transport services to ship archived logs to the standby.
This document provides steps to apply the 10.2.0.5 patch set to a 2-node Oracle RAC database on Linux x86_64. It involves upgrading Clusterware, ASM, and the database homes. Key steps include backing up components, stopping services, running root scripts, and verifying versions after upgrade. Issues encountered like file handle limits are also addressed.
This document outlines the steps to upgrade an Oracle database from version 11.2.0.4 to 12c. It includes prechecks such as validating objects, checking for duplicate objects, gathering statistics. It also details backup procedures like enabling flashback and creating a restore point. The key steps are running the preupgrade tool, disabling jobs and scripts, validating tablespaces and removing the EM repository before initiating the upgrade using DBUA.
UPGRADING FROM ORACLE ENTERPRISE MANAGER 10G TO CLOUD CONTROL 12C WITH ZERO D... - Leighton Nelson
A step-by-step description of using the 2-System Method to upgrade from Oracle Enterprise Manager 10g to Enterprise Manager Cloud Control 12c while upgrading database and migrating platforms with near zero downtime.
The document discusses two techniques for upgrading a 10g Oracle RAC cluster to 11gR2 grid infrastructure (GI):
1) Creating a new cluster by uninstalling the existing 10g software, installing 11gR2 GI, and migrating the database and services to the new cluster.
2) Upgrading the existing cluster; the document discusses issues encountered with this approach during the rootUpgrade.sh script and the cluster restart.
It also summarizes the steps taken to migrate an existing 11gR2 ASM configuration to an extended RAC configuration, distributing the disk groups across two separate storage systems.
Rman backup and recovery 11g new features - Nabi Abdul
The document discusses new features in Oracle Database 11g related to backup and recovery. It provides examples of using the Data Recovery Advisor from both the command line and GUI to diagnose and repair a missing datafile. It also demonstrates using RMAN's new VALIDATE DATABASE command to proactively check for physical corruption without writing custom scripts.
Artificial Intelligence and XPath Extension Functions - Octavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️ - Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit Paris - Neo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover the latest innovations from Neo4j, including the latest cloud integrations and product improvements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
DDS Security Version 1.2 was adopted in 2024. This revision strengthens support for long-running systems, adding new cryptographic algorithms, certificate revocation, and hardening against DoS attacks.
SMS API Integration in Saudi Arabia | Best SMS API Service - Yara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling Extensions - Peter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
OpenMetadata Community Meeting - 5th June 2024 - OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
GraphSummit Paris - The art of the possible with Graph Technology - Neo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
What is Master Data Management by PiLog Group - aymanquadri279
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
Contents
Types of Upgrade
Compatibility to upgrade to 11g from 10g
Install 11g software
Preupgrade checkups
Upgrade
Post Upgrade Steps
Known Issues
Types of Upgrade
1. Export/Import
The export/import process can be used to transfer data between versions. Export the data using the expdp utility from the source version and import it using the 11g version of impdp.
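As a rough sketch (the schema, directory and file names below are illustrative, not taken from the original document):

```
REM On the source (10g) server: export with Data Pump
expdp system/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log

REM On the target (11g) server, after creating an empty 11g database: import
impdp system/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log
```

The directory object DATA_PUMP_DIR must exist and be readable/writable by the database on each side.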
2. Using DBUA
Database Upgrade Assistant (DBUA) is a GUI tool for upgrading the database. It is the easiest and most commonly practised method, and the one Oracle recommends.
3. Manual Method
As the name suggests, the database is upgraded manually by running the upgrade scripts yourself.
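A minimal sketch of the manual route, assuming the 11g software is already installed and the instance is started from the new home (the script names are the standard 11g catalog upgrade scripts; check the upgrade guide for your exact release):

```sql
-- Run from the new (11g) Oracle Home, connected as SYSDBA
STARTUP UPGRADE;                 -- open the database in upgrade mode
SPOOL upgrade.log
@?/rdbms/admin/catupgrd.sql      -- main catalog upgrade script; shuts the instance down when done
SPOOL OFF
STARTUP;
@?/rdbms/admin/utlu111s.sql      -- display the post-upgrade status
@?/rdbms/admin/utlrp.sql         -- recompile invalid objects
```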
Compatibility Matrix
Minimum version of the Oracle database software that can be directly upgraded to Oracle 11g Release 2:
Source Database       Target Database
9.2.0.8 or higher     11.2.x
10.1.0.5 or higher    11.2.x
10.2.0.2 or higher    11.2.x
11.1.0.6 or higher    11.2.x
Step 1 Installing Oracle 11g
We cannot upgrade the existing Oracle Home, since 11g is not a patchset. We have to install the 11g Oracle Home as a separate ORACLE_HOME in parallel to the 10g Oracle Home.
Example: my 10g Oracle Home is D:\app\AVCSOFTWARE\product\10.2.0\dbhome_1
then my 11g Oracle Home is D:\app\AVCSOFTWARE\product\11.2.0\dbhome_1
A parallel 11g directory can simply be created and the Oracle Home installed in this location.
Start the 11g installation.
Click on the 11g setup.
Screen 1 – Configure Security Updates
select “Oracle Database 11g”
Screen 2 – Select Installation Method
Choose “Install Software Only”
Screen 3 – Grid Option
Single instance.
Screen 4 – Select Installation Type
Choose “Enterprise Edition”
Screen 5 – Installation Location
Oracle Base as parent directory of ORACLE HOME
Oracle Base : C:\oracle11g
Screen 6 – Product Specific Pre-requisite Checks
It performs the checks; if everything is OK, click Next.
Screen 7 – Click on Finish
Screen 8 – Summary
Click on “Install”
This will complete the software installation for Oracle Database 11g.
Create a listener using netca. If one is not configured, you will be prompted to create a listener for 11g while upgrading to 11g.
Step 2 Back Up the Database:
Before proceeding, back up the 10g database that is being upgraded. Make sure you have a good backup:
1- Perform Cold Backup
(or)
2- Take a backup using RMAN
Connect to RMAN:
rman target / nocatalog
RUN
{
ALLOCATE CHANNEL ch1 TYPE DISK;
BACKUP DATABASE PLUS ARCHIVELOG;
}
Step 3 Pre-Upgrade Utility
3.1 Check for the integrity of the source database prior to starting the upgrade
ORACLE_SID = [oracle] ? SID
sqlplus / as sysdba
SQL> spool D:\upgrade_info1.log
SQL> @?/rdbms/admin/utlrp.sql
SQL> spool off;
SQL> purge dba_recyclebin;
Run utlrp.sql (multiple times) to recompile the invalid objects in the database, until there is no change in the number of invalid objects.
3.2 Execute Pre-Upgrade Script
Go to the 11g ORACLE_HOME/rdbms/admin directory and copy the file utlu111i.sql to some temp location:
cd <11g ORACLE_HOME>
cd rdbms\admin
copy utlu111i.sql D:\temp
The utility gives its output in the form of recommendations to be implemented before starting the upgrade. Unless these requirements are met, the upgrade will fail. Most of the time the issues that come up are with the time zone. Log in to the 10g database and run the script you copied:
sqlplus / as sysdba
SQL> spool D:\upgrade_info2.log
SQL> @D:\temp\utlu111i.sql
SQL> spool off;
Check the output of the Pre-Upgrade Information Tool in upgrade_info2.log and fix any issues.
Obsolete/Deprecated Parameters: [Update Oracle Database 11.1 init.ora or spfile]
–> "background_dump_dest" replaced by "diagnostic_dest"
–> "user_dump_dest" replaced by "diagnostic_dest"
–> "core_dump_dest" replaced by "diagnostic_dest"
To fix these obsolete parameters, comment them out of the initialization parameter file and replace them with the new parameter: comment out the three deprecated parameters above and add *.diagnostic_dest.
WARNING: –> Database is using an old timezone file version.
…. Patch the 10.2.0.1.0 database to timezone file version 4
…. BEFORE upgrading the database. Re-run utlu111i.sql after
…. patching the database to record the new timezone file version.
To find the time zone file version on the source database (10g), run:
SQL> select * from v$timezone_file;
If the time zone file version is less than 4, apply the time zone patch. For 10.2.0.5 the version is 4, so we need not apply the patch here.
3.3 Check invalid objects
SQL> select object_name, owner, object_type from all_objects where status like 'INVALID';
SQL> select count(*) from dba_objects where status = 'INVALID';
SQL> select comp_name, version, status from dba_registry;
SQL> select count(*) from dba_objects where status = 'VALID';
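To make the before/after comparison easier, one optional step (not part of the original deck; the table name is illustrative) is to snapshot the invalid-object list before the upgrade:

```sql
-- Hypothetical helper table: a pre-upgrade snapshot of invalid objects,
-- so post-upgrade counts can be compared object by object.
create table system.pre_upg_invalids as
  select owner, object_name, object_type
  from dba_objects
  where status = 'INVALID';
```

After the upgrade, a simple MINUS query against this table shows which objects are newly invalid.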
3.4 Create & back up a pfile from 10g and edit the pfile to suit 11g
If using an spfile, create a pfile while connected to the 10g database:
SQL> create pfile from spfile;
This will create the pfile in the 10g $ORACLE_HOME/dbs/init[SID].ora
Copy the initialization file (pfile) from the source (10g) to the target (11g)
Adjust initialization parameters specific to 11g, for example:
a) Remove *.background_dump_dest, *.core_dump_dest and *.user_dump_dest, and add *.diagnostic_dest='/11g_base' (the 11g base directory)
b) Change *.compatible='10.2.0.1.0' to *.compatible='11.1.0' or '11.2.0'
c) audit_trail='db'
remote_login_passwordfile='EXCLUSIVE'
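Put together, the edited section of the 11g pfile might look like the fragment below (the diagnostic_dest value reuses the example Oracle Base from this document; adjust all values for your environment):

```
# 11g-adjusted initialization parameters (illustrative values)
*.compatible='11.2.0'
*.diagnostic_dest='C:\oracle11g'
*.audit_trail='db'
*.remote_login_passwordfile='EXCLUSIVE'
# background_dump_dest, user_dump_dest and core_dump_dest removed:
# all three are replaced by diagnostic_dest in 11g
```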
Create the admin directories:
cd C:\oracle11g\admin\SID
mkdir pfile
mkdir audit
Step 4 Run Pre-Upgrade Utility again
After executing the recommended steps, run the pre-upgrade utility once again to make sure you don't get any critical warnings.
Run the pre-upgrade utility script on the 10g database while connected from the 10g Oracle Home. If everything looks fine, shut down the database from the 10g Oracle Home. This time make sure you don't have critical warnings like the one about the TIMEZONE version.
ORACLE_SID = SIDNAME
sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp
SQL> purge dba_recyclebin
5. Optimizer Statistics:
When upgrading to Oracle Database 11g Release 2 (11.2), optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade.
Gather Dictionary stats:
Connect as sys user and gather statistics
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', estimate_percent
=> dbms_stats.auto_sample_size, method_opt => 'FOR ALL COLUMNS SIZE AUTO',
cascade => TRUE);
6. Upgrade Database:
Set ORACLE_HOME to the 11g home:
set ORACLE_HOME=<11g home>
echo %ORACLE_HOME%
dbua
Screen 1 : Select the operation – choose Upgrade
Screen 2 : Select the database to be upgraded
Screen 3 : Prerequisite check
If the prerequisite checks highlight any issues, take the appropriate action to fix them.
Screen 4 : Upgrade options
a. Recompile invalid objects
b. Degree of parallelism: 60
c. Upgrade time zone
d. Backup: if you already have a backup, do not select this option
Tick the first three options.
Screen 5 : Management Options
Configure Enterprise Manager
Screen 6 : Move db files
Choose "Do not move database files as part of upgrade".
Specify the FRA location (let it be the same; if the directory is not present, create it).
Give the location for DIAGNOSTIC_DEST: C:\oracle11g
Screen 6 : Network Configuration
Select the listener for 11g.
Screen 7 : Recovery Options
If you already have a backup, select "I have my own backup".
Screen 8 : Summary
Click on Finish. If everything goes fine, check the upgrade results, then click the "Close" button to leave DBUA.
OLAP errors can be ignored.
Post Upgrade Steps
Step 1
set ORACLE_HOME=<11g Oracle Home>
set ORACLE_SID=orcl
Give the name of the database that was upgraded.
Check that the listener is from 11g; if not, do the following:
stop the listener, start it and check its status, and notice that it has started from the 11g home.
Step 2:
sqlplus / as sysdba
SQL> select instance_name, host_name, version from v$instance; -- it should show the new version
SQL> select comp_name, version, status from dba_registry; -- check for invalid components
Some components such as OWB may still show the old version; they can be upgraded later. OLAP being OFF is OK.
SQL> select comp_name, version, status from dba_registry where status = 'INVALID';
SQL> select count(*) from dba_objects where status = 'VALID';
Check the count here and compare it with the count before the upgrade.
SQL> select banner from v$version;
The BANNER column should show 11g.
Step 3: Create spfile from pfile
Create a server parameter file from the initialization parameter file:
SQL> create spfile from pfile;
This will create an spfile as a copy of the init.ora file located in %ORACLE_HOME%\database
Start the database with the spfile:
SQL> shutdown immediate
SQL> startup
Check the 11g alert log file for any errors. The database is now ready to use with the Oracle Database 11g software.
SQL> show parameter spfile; -- it should show the spfile
SQL> @?/rdbms/admin/utlu111s.sql
SQL> select count(*) from dba_objects where status = 'INVALID';
SQL> @?/rdbms/admin/utlrp.sql
SQL> select count(*) from dba_objects where status = 'INVALID';
Back up the upgraded (11g) database.
Create system statistics during a regular workload period – otherwise inappropriate values for the CBO will be used:
SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('start');
... – gather statistics while running a typical workload
SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');
SQL> select pname NAME, pval1 VALUE, pval2 INFO
from aux_stats$;
NAME VALUE INFO
-------------------- ---------- ------------------------------
STATUS COMPLETED
DSTART 04-03-2009 12:30
DSTOP 05-03-2009 12:30
FLAGS 1
CPUSPEEDNW 1392.39
IOSEEKTIM 8.405
IOTFRSPEED 255945.605
...
Known Issues
Errors after DBUA completes and the database is selected:
1. dbms_ldap package
2. gather stats
3. the database has event or trace_event initialization parameters
Ignore these errors.
OLAP (Oracle online analytical processing) errors do come up after the summary; these can be ignored.
Error ORA-06550 During Upgrade
Description
While running the upgrade you may encounter an ORA-06550, as detailed in Metalink article 1066828.1.
Fix
This error can be ignored. However, you can avoid it by applying patch 9315778 to the 11.2 binaries before starting the upgrade.