The document provides steps for upgrading an Oracle database to version 10g Release 2. It details:
1) Running scripts that check the current database configuration and requirements for upgrade.
2) Making any necessary adjustments to parameters, tablespaces, and redo logs.
3) Creating scripts to recreate database links in case a downgrade is needed.
4) Addressing issues with data types like timestamps with timezones and national character sets.
Steps for Upgrading the Database to 10g Release 2
Preparing to Upgrade
This section covers all the steps that must be performed on the previous version of Oracle.
Please note that the database must be running in normal mode in the old release.
Step 1:
Log in to the system as the owner of the new 10gR2 ORACLE_HOME and copy the
following files from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory
outside of the Oracle home, such as the /tmp directory on your system:
ORACLE_HOME/rdbms/admin/utlu102i.sql
ORACLE_HOME/rdbms/admin/utltzuv2.sql
Make a note of the new location of these files.
Step 2:
Change to the temporary directory that you copied files to in Step 1.
Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges.
Then run and spool the utlu102i.sql file.
$ sqlplus '/as sysdba'
SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off
Then, check the spool file and examine the output of the upgrade information tool.
The sections that follow describe the output of the Upgrade Information Tool
(utlu102i.sql).
NOTE: If you are upgrading from 8.1.7.4, the utlu102i.sql script will fail with an
ORA-1403 error. Please follow the workaround in Note 5640527.8 (or Note 407031.1) to
enable utlu102i.sql to run. [Not to worry: this has been fixed in 10.2.0.4, which we are
currently using.]
Database:
This section displays global database information about the current database such as the database name,
release number, and compatibility level. A warning is displayed if the COMPATIBLE initialization parameter
needs to be adjusted before the database is upgraded.
Logfiles:
This section displays a list of redo log files in the current database whose size is less than 4 MB. For each
log file, the file name, group number, and recommended size are displayed. New files of at least 4 MB
(preferably 10 MB) need to be created in the current database. Any redo log files less than 4 MB must be
dropped before the database is upgraded.
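The redo log check and fix can be sketched in SQL*Plus as follows. The file names, group numbers and sizes below are illustrative assumptions, not values from your database; take the real ones from the tool's output.

```sql
-- List redo log files smaller than 4 MB
SELECT f.group#, f.member, l.bytes/1024/1024 AS size_mb
  FROM v$log l, v$logfile f
 WHERE l.group# = f.group#
   AND l.bytes < 4*1024*1024;

-- Add a replacement group of at least 4 MB (preferably 10 MB) ...
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/db/redo04.log') SIZE 10M;

-- ... then drop an undersized group once it is INACTIVE (check V$LOG.STATUS)
ALTER DATABASE DROP LOGFILE GROUP 1;
```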
Tablespaces:
This section displays a list of tablespaces in the current database. For each tablespace, the tablespace
name and minimum required size is displayed. In addition, a message is displayed if the tablespace is
adequate for the upgrade. If the tablespace does not have enough free space, then space must be added to
the tablespace in the current database. Tablespace adjustments need to be made before the database is
upgraded.
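To give the tablespace adjustment a concrete shape, either grow an existing datafile or add a new one. The datafile paths and sizes below are illustrative assumptions; use the minimum sizes reported by the tool.

```sql
-- Either grow an existing datafile ...
ALTER DATABASE DATAFILE '/u01/oradata/db/system01.dbf' RESIZE 800M;

-- ... or add another datafile to the tablespace
ALTER TABLESPACE system ADD DATAFILE '/u01/oradata/db/system02.dbf' SIZE 500M;
```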
Update Parameters:
This section displays a list of initialization parameters in the parameter file of the current database that must
be adjusted before the database is upgraded. The adjustments need to be made to the parameter file after it
is copied to the new Oracle Database 10g release.
Deprecated Parameters:
This section displays a list of initialization parameters in the parameter file of the current database that are
deprecated in the new Oracle Database 10g release.
Obsolete Parameters:
This section displays a list of initialization parameters in the parameter file of the current database that are
obsolete in the new Oracle Database 10g release. Obsolete initialization parameters need to be removed
from the parameter file before the database is upgraded.
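One way to cross-check the obsolete parameter list before editing the parameter file is to ask the instance itself; the V$OBSOLETE_PARAMETER view flags obsolete parameters that are still specified:

```sql
-- Obsolete parameters still present in the current parameter file
SELECT name
  FROM v$obsolete_parameter
 WHERE isspecified = 'TRUE';
```

Any rows returned correspond to entries that must be removed from the parameter file before the upgrade.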
Components:
This section displays a list of database components in the new Oracle Database 10g release that will be
upgraded or installed when the current database is upgraded.
Miscellaneous Warnings:
This section provides warnings about specific situations that may require attention before and/or after the
upgrade.
SYSAUX Tablespace:
This section displays the minimum required size for the SYSAUX tablespace, which is required in Oracle
Database 10g. The SYSAUX tablespace must be created after the new Oracle Database 10g release is
started and BEFORE the upgrade scripts are invoked.
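A minimal sketch of that creation step follows; the datafile path and size are assumptions, so use at least the minimum size reported by the Upgrade Information Tool. The upgrade expects SYSAUX to be locally managed with automatic segment space management.

```sql
-- Run after starting the database under the new 10g release,
-- and before invoking the upgrade scripts
CREATE TABLESPACE sysaux
  DATAFILE '/u01/oradata/db/sysaux01.dbf' SIZE 500M REUSE
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO
  ONLINE;
```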
Step 3:
Check for the deprecated CONNECT Role
After upgrading to 10gR2, the CONNECT role will only have the CREATE SESSION
privilege; the other privileges granted to the CONNECT role in earlier releases will be
revoked during the upgrade.
To identify which users and roles in your database are granted the CONNECT role, use
the following query:
SELECT grantee FROM dba_role_privs
WHERE granted_role = 'CONNECT' and
grantee NOT IN ('SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
'LOGSTDBY_ADMINISTRATOR', 'ORDSYS', 'ORDPLUGINS', 'OEM_MONITOR',
'WKSYS', 'WKPROXY', 'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS',
'DMSYS', 'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA', 'SI_INFORMTN_SCHEMA',
'XDB', 'ODM');
If users or roles require privileges other than CREATE SESSION, then grant the specific
required privileges prior to upgrading. The upgrade scripts adjust the privileges for the
Oracle-supplied users.
In Oracle 9.2.x and 10.1.x, the CONNECT role includes the following privileges:
SELECT GRANTEE, PRIVILEGE FROM DBA_SYS_PRIVS
WHERE GRANTEE = 'CONNECT';
GRANTEE PRIVILEGE
------------------------------ ---------------------------
CONNECT CREATE VIEW
CONNECT CREATE TABLE
CONNECT ALTER SESSION
CONNECT CREATE CLUSTER
CONNECT CREATE SESSION
CONNECT CREATE SYNONYM
CONNECT CREATE SEQUENCE
CONNECT CREATE DATABASE LINK
In Oracle 10.2 the CONNECT role only includes CREATE SESSION privilege.
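If an affected user or role still needs the old privileges, they can be granted explicitly before the upgrade. A sketch, where APP_USER is a placeholder grantee:

```sql
-- Re-grant the pre-10.2 CONNECT privileges explicitly
-- (CREATE SESSION is not needed; the CONNECT role retains it)
GRANT CREATE TABLE, CREATE VIEW, ALTER SESSION, CREATE CLUSTER,
      CREATE SYNONYM, CREATE SEQUENCE, CREATE DATABASE LINK
   TO app_user;
```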
Step 4:
Create a script to recreate the database links in case of a downgrade of the database.
During the upgrade to 10gR2, any passwords in database links will be encrypted. To
downgrade back to the original release, all of the database links with encrypted
passwords must be dropped prior to the downgrade. Consequently, the database links
will not exist in the downgraded database. If you anticipate a requirement to be able to
downgrade back to your original release, then save the information about affected
database links from the SYS.LINK$ table, so that you can recreate the database links
after the downgrade.
The following script can be used to reconstruct the database links.
SELECT
'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)
||DECODE(U.NAME,'PUBLIC',Null, U.NAME||'.')|| L.NAME||chr(10)
||'connect to ' || L.USERID || ' identified by '''
||L.PASSWORD||''' using ''' || L.host || ''''
||chr(10)||';' TEXT
FROM sys.link$ L,
sys.user$ U
WHERE L.OWNER# = U.USER# ;
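To turn the query output into a runnable script rather than screen output, it can be spooled from SQL*Plus first (the file name here is an example):

```sql
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 200
SPOOL recreate_dblinks.sql
-- run the SELECT above here
SPOOL OFF
```

Keep the spooled file somewhere safe: it contains the link passwords in clear text.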
Step 5:
This step is only required for the TIMESTAMP WITH TIME ZONE datatype. Otherwise skip to
Step 6.
Note that this step concerns a change introduced in 10gR1 that may affect existing data of
the TIMESTAMP WITH TIME ZONE datatype.
For example, if a user enters TIMESTAMP '2003-02-17 09:00:00 America/Sao_Paulo', Oracle
converts the data to UTC based on the transition rules in the time zone file and stores it
on disk. So '2003-02-17 11:00:00' is stored along with the time zone ID for
'America/Sao_Paulo', because the offset for this particular time is '-02:00'. Now suppose
the transition rules are modified and the offset for this particular time changes to
'-03:00'. When users retrieve the data, they will get '2003-02-17 08:00:00
America/Sao_Paulo': a one-hour difference compared to the original value.
Change to the temporary directory that you copied files to in Step 1.
Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges.
Then run and spool the utltzuv2.sql file.
$ sqlplus '/as sysdba'
SQL> spool TimeZone_Info.log
SQL> @utltzuv2.sql
SQL> spool off
If the utltzuv2.sql script identifies columns with time zone data affected by a database
upgrade, then use the following workaround:
create tables with the time zone information in character format (for example,
TO_CHAR(column, 'YYYY-MM-DD HH24.MI.SSXFF TZR')), and recreate the
TIMESTAMP data from these tables after the upgrade.
For example, user scott has a table tztab:
create table tztab(x number primary key, y timestamp with time zone);
insert into tztab values(1, timestamp '2003-02-17 09:00:00 America/Sao_Paulo');
Before the upgrade, you can create a table tztab_back; note that column y here is defined as
VARCHAR2 to preserve the original value.
create table tztab_back(x number primary key, y varchar2(256));
insert into tztab_back select x,
to_char(y, 'YYYY-MM-DD HH24.MI.SSXFF TZR') from tztab;
After the upgrade, you need to update the data in the table tztab using the values in tztab_back.
update tztab t set t.y = (select to_timestamp_tz(t1.y,
'YYYY-MM-DD HH24.MI.SSXFF TZR') from tztab_back t1 where t.x=t1.x);
Step 6:
Starting in Oracle 9i, the National Character Set (NLS_NCHAR_CHARACTERSET) is
limited to UTF8 and AL16UTF16. Any other NLS_NCHAR_CHARACTERSET is no
longer supported.
For more details, refer to Note 276914.1 "The National Character Set in Oracle 9i and
10g"
NOTE: If you are upgrading from Oracle9i to 10g, skip to step 7.
When upgrading from Oracle8i to 10g, the value of NLS_NCHAR_CHARACTERSET is
based on the value currently used in the Oracle8i version.
If the NLS_NCHAR_CHARACTERSET is UTF8, then it will stay UTF8. In all other
cases the NLS_NCHAR_CHARACTERSET is changed to AL16UTF16, and, if used,
N-type data (data in columns using NCHAR, NVARCHAR2 or NCLOB) may need to be
converted.
The change itself is done in step 38 by running the upgrade script.
To check whether there are any N-type objects in a database, run the following query:
select distinct OWNER, TABLE_NAME
from DBA_TAB_COLUMNS
where DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB')
and OWNER not in ('SYS','SYSTEM','XDB');
If no rows are returned, the database is not using N-type columns for
user data, so simply go to the next step.
If you have N-type columns for user data then run the following query:
SQL> select * from nls_database_parameters where parameter
='NLS_NCHAR_CHARACTERSET';
If you are using N-type columns AND your National Characterset is UTF8 or is in the
following list:
JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED
then simply go to the next step. The conversion of the user data itself will then be
done in step 38.
If you are using N-type columns AND your National Characterset is NOT UTF8 or NOT
in the following list:
JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED
(your current NLS_NCHAR_CHARACTERSET is, for example, US7ASCII,
WE8ISO8859P1, CL8MSWIN1251 ...)
then you have to:
• change the tables to use CHAR, VARCHAR2 or CLOB instead of the N-type,
or
• use export/import for the table(s) containing N-type columns and truncate those
tables before migrating to 10g.
The recommended NLS_LANG during export is simply the NLS_CHARACTERSET, not
the NLS_NCHAR_CHARACTERSET.
Step 7:
When upgrading to Oracle Database 10g, optimizer statistics are collected for dictionary
tables that lack statistics. This statistics collection can be time consuming for databases
with a large number of dictionary tables, but statistics gathering only occurs for those
tables that lack statistics or are significantly changed during the upgrade.
To decrease the amount of downtime incurred when collecting statistics, you can collect
statistics prior to performing the actual database upgrade.
As of Oracle Database 10g Release 10.1, Oracle recommends that you use the
DBMS_STATS.GATHER_DICTIONARY_STATS procedure to gather these statistics.
You can enter the following:
$ sqlplus '/as sysdba'
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
For Oracle8i and Oracle9i, use the DBMS_STATS.GATHER_SCHEMA_STATS
procedure to gather statistics.
Backup the existing statistics as follows:
$ sqlplus '/as sysdba'
SQL>spool sdict
SQL>grant analyze any to sys;
SQL>exec dbms_stats.create_stat_table('SYS','dictstattab');
SQL>exec dbms_stats.export_schema_stats('WMSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('MDSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('CTXSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('XDB','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('WKSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('LBACSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('OLAPSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('DMSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('ODM','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('ORDSYS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('ORDPLUGINS','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('SI_INFORMTN_SCHEMA','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('OUTLN','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('DBSNMP','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('SYSTEM','dictstattab',statown => 'SYS');
SQL>exec dbms_stats.export_schema_stats('SYS','dictstattab',statown => 'SYS');
SQL>spool off
This data is useful if you want to revert to the old statistics later.
For example, the following PL/SQL subprograms import the statistics for the SYS
schema after deleting the existing statistics:
exec dbms_stats.delete_schema_stats('SYS');
exec dbms_stats.import_schema_stats('SYS','dictstattab');
To gather the statistics, connect to the database AS SYSDBA using SQL*Plus and run:
$ sqlplus '/as sysdba'
SQL>spool gdict
SQL>grant analyze any to sys;
SQL>exec dbms_stats.gather_schema_stats('WMSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('MDSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('CTXSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('XDB',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('WKSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('LBACSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('OLAPSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('DMSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('ODM',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('ORDSYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('ORDPLUGINS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('SI_INFORMTN_SCHEMA',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('OUTLN',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('DBSNMP',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
SQL>spool off
Step 8:
Check for invalid objects in the database:
SQL> col OBJECT format A30
SQL> col OWNER format A20
SQL> col type format A15
spool invalid_pre.lst
select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <> 'VALID';
spool off
Run the following script as a user with SYSDBA privs using SQL*Plus and then requery
invalid objects:
% sqlplus '/as sysdba'
SQL> @?/rdbms/admin/utlrp.sql
This last query returns, in the file 'invalid_pre.lst', a list of all objects that
could not be recompiled before the upgrade.
If you are upgrading from Oracle9iR2 (9.2), verify that the view dba_registry contains
data. If the view is empty, run the following scripts from the 9.2 home:
% sqlplus '/as sysdba'
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql
and verify that the dba_registry view now contains data.
Step 9:
To check for corruption in the dictionary, run the following commands in SQL*Plus
connected as SYS:
Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql
Select 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
from dba_clusters
where owner='SYS'
union
Select 'Analyze table "'||table_name||'" validate structure cascade;'
from dba_tables
where owner='SYS' and partitioned='NO' and (iot_type='IOT' or iot_type is NULL)
union
Select 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;'
from dba_tables
where owner='SYS' and partitioned='YES';
spool off
This creates a script called analyze.sql.
Now execute the following steps.
$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql
This script (analyze.sql) should not return any errors.
Step 10:
Ensure that all Snapshot refreshes are successfully completed, and that replication is
stopped.
$ sqlplus '/ as sysdba'
SQL> select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;
Step 11:
Stop the listener for the database:
$ lsnrctl
LSNRCTL> stop
Ensure no files need media recovery:
$ sqlplus '/ as sysdba'
SQL> select * from v$recover_file;
This should return no rows.
Step 12:
Ensure no files are in backup mode:
SQL> select * from v$backup where status!='NOT ACTIVE';
This should return no rows.
Step 13:
Resolve any outstanding unresolved distributed transaction:
SQL> select * from dba_2pc_pending;
If this returns rows, do the following for each pending transaction, passing the
LOCAL_TRAN_ID value returned by the first query:
SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('<local_tran_id>');
SQL> commit;
Step 14:
Disable all batch and CRON jobs.
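As an illustrative sketch (not part of the original note), the cron side of this step on UNIX might look as follows; the backup file name is an arbitrary choice:

```shell
# Save the current user's crontab before disabling jobs; if there is no
# crontab (or no crontab command), create an empty backup file instead.
crontab -l > cron_upgrade_backup 2>/dev/null || : > cron_upgrade_backup
echo "saved cron entries to cron_upgrade_backup"
# When ready, disable the jobs with:   crontab -r
# Restore them after the upgrade with: crontab cron_upgrade_backup
```

The destructive `crontab -r` is left as a comment so the sketch is safe to run as-is.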
Step 15:
Ensure the users sys and system have 'system' as their default tablespace.
SQL> select username, default_tablespace from dba_users
where username in ('SYS','SYSTEM');
To modify use:
SQL> alter user sys default tablespace SYSTEM;
SQL> alter user system default tablespace SYSTEM;
Step 16:
Ensure that the AUD$ table is in the SYSTEM tablespace when auditing is enabled:
SQL> select tablespace_name from dba_tables where table_name='AUD$';
Step 17:
Note down where all control files are located.
SQL> select * from v$controlfile;
Step 18:
If table XDB.MIGR9202STATUS exists in the database, drop it before upgrading the
database (to avoid the issue described in Note 356082.1)
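In the SQL*Plus style used elsewhere in this note, the check and drop would look like this sketch (drop the table only if the first query returns a row):

```sql
SQL> select owner, table_name from dba_tables
     where owner = 'XDB' and table_name = 'MIGR9202STATUS';
SQL> drop table XDB.MIGR9202STATUS;
```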
Step 19:
Shutdown the database
$ sqlplus '/as sysdba'
SQL> shutdown immediate;
Step 20:
Perform a full cold backup:
1. If using SQL*BT, execute ora_adm_
2. Or take an online backup using RMAN.
You can either copy the files manually or sign on to RMAN:
$ rman "target / nocatalog"
And issue the following RMAN commands:
RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT 'some_backup_directory%U' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO 'save_controlfile_location';
}
Upgrading to the New Oracle Database 10g Release 2
Step 21:
Update the init.ora file:
- Make a backup of the old init.ora file
- Copy it from the old (pre-10.2) ORACLE_HOME to the new (10.2) ORACLE_HOME
On Unix/Linux, the default location of the file is the $ORACLE_HOME/dbs directory
- Comment out any obsolete parameters (listed in Appendix A).
- Change all deprecated parameters (listed in Appendix B).
- Set the COMPATIBLE initialization parameter to an appropriate value. If you are
upgrading from 8.1.7.4 then set the COMPATIBLE parameter to 9.2.0 until after the
upgrade has been completed successfully. If you are upgrading from 9.2.0 or 10.1.0
then leave the COMPATIBLE parameter set to its current value until the upgrade
has been completed successfully. This will avoid any unnecessary ORA-942 errors
from being reported in SMON trace files during the upgrade (because the upgrade
is looking for 10.2 objects that have not yet been created)
- If you have the parameter NLS_LENGTH_SEMANTICS currently set to CHAR, change
the value to BYTE during the upgrade (to avoid the issue described in Note 4638550.8)
- Verify that the parameter DB_DOMAIN is set properly.
- Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to
at least 24 MB (e.g. pga_aggregate_target = 26214400).
- Ensure that the SHARED_POOL_SIZE and the LARGE_POOL_SIZE are at least
150 MB (e.g. shared_pool_size = 157286400).
Please also check the "KNOWN ISSUES" section.
- Make sure the JAVA_POOL_SIZE initialization parameter is set to at least 150 MB
(e.g. java_pool_size = 157286400).
- Ensure there is a value for DB_BLOCK_SIZE
- Comment out any existing AQ_TM_PROCESSES and JOB_QUEUE_PROCESSES
parameter settings, and add new lines in the init.ora/spfile.ora that explicitly set
AQ_TM_PROCESSES=0 and JOB_QUEUE_PROCESSES=0 for the duration of the
upgrade. The "startup upgrade" command (see step 30) should ensure that these settings
are used, but it's worth making sure.
- If you have defined an UNDO tablespace, set the parameter
UNDO_MANAGEMENT=AUTO (otherwise, either unset the parameter or explicitly set
it to MANUAL). See Note 135090.1 for further information about the Automatic Undo
Management feature.
- Make sure all path names in the parameter file are fully specified. You should not have
relative path names in the parameter file.
- If you are using a cluster database, set the parameter CLUSTER_DATABASE=FALSE
during the upgrade.
- If you are upgrading a cluster database, then modify the initdb_name.ora file in the
same way that you modified the parameter file.
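Pulling the points above together, an edited init.ora for the duration of the upgrade might contain a fragment like the one below; every value is an example taken from the list above, not a prescription for your system:

```text
# Example init.ora fragment for the duration of the upgrade
compatible           = 9.2.0       # leave at current value if already >= 9.2.0
shared_pool_size     = 157286400   # at least 150 MB
java_pool_size       = 157286400   # at least 150 MB
pga_aggregate_target = 26214400    # at least 24 MB
aq_tm_processes      = 0           # explicitly 0 during the upgrade
job_queue_processes  = 0           # explicitly 0 during the upgrade
undo_management      = AUTO        # only if an UNDO tablespace is defined
cluster_database     = FALSE       # RAC only, for the duration of the upgrade
```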
Step 22:
Check for adequate free space on the archive log destination file systems.
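A quick way to perform this check is sketched below; the archive destination path is an assumed example (query the log_archive_dest parameter for your actual destination):

```shell
# Report free space (in KB) on the archive log destination file system.
# ARCH_DEST is an example placeholder; set it to your log_archive_dest path.
ARCH_DEST=${ARCH_DEST:-/}
df -Pk "$ARCH_DEST" | awk 'NR==2 {printf "%s: %s KB free\n", $6, $4}'
```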
Step 23:
Ensure the NLS_LANG variable is set correctly:
$ env | grep NLS_LANG
Step 24:
If needed, copy the SQL*Net files (listener.ora, tnsnames.ora, etc.) to the new
location (when no TNS_ADMIN environment variable is used):
$ cp $OLD_ORACLE_HOME/network/admin/*.ora $ORACLE_HOME/network/admin
Step 25: [Applicable only for Windows O/s] Skip to Step 26
If your operating system is Windows (NT, 2000, XP or 2003), delete your services with
the ORADIM of your old Oracle version.
Stop the OracleServiceSID Oracle service of the database you are upgrading, where SID
is the instance name. For example, if your SID is ORCL, then enter the following at a
command prompt:
C:\> NET STOP OracleServiceORCL
For Oracle 8.0 this is:
C:\> ORADIM80 -DELETE -SID SID
For Oracle 8i or higher this is:
C:\> ORADIM -DELETE -SID SID
Also create the new Oracle Database 10gR2 service at a command prompt using the
ORADIM command of the new Oracle Database release:
C:\> ORADIM -NEW -SID SID -INTPWD PASSWORD -MAXUSERS USERS
-STARTMODE AUTO -PFILE ORACLE_HOME\DATABASE\INITSID.ORA
Step 26:
Copy configuration files from the ORACLE_HOME of the database being upgraded to
the new Oracle Database 10g ORACLE_HOME:
If your parameter file resides within the old environment's ORACLE_HOME, then copy
it to the new ORACLE_HOME. By default, Oracle looks for the parameter file in
ORACLE_HOME/dbs on UNIX platforms and in ORACLE_HOME\database
on Windows operating systems. The parameter file can reside anywhere you wish, but it
should not reside in the old environment's ORACLE_HOME after you upgrade to Oracle
Database 10g.
If your parameter file is a text-based initialization parameter file with either an IFILE
(include file) or a SPFILE (server parameter file) entry, and the file specified in the
IFILE or SPFILE entry resides within the old environment's ORACLE_HOME, then
copy the file specified by the IFILE or SPFILE entry to the new ORACLE_HOME. The
file specified in the IFILE or SPFILE entry contains additional initialization parameters.
If you have a password file that resides within the old environment's ORACLE_HOME,
then move or copy the password file to the new Oracle Database 10g ORACLE_HOME.
The name and location of the password file are operating system-specific.
On UNIX platforms, the default password file is ORACLE_HOME/dbs/orapwsid.
If you are upgrading a cluster database and your initdb_name.ora file resides within the
old environment's ORACLE_HOME, then move or copy the initdb_name.ora file to the
new ORACLE_HOME.
Note:
If you are upgrading a cluster database, then perform this step on all nodes in which this
cluster database has instances configured.
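On UNIX, the copies described above can be sketched as follows; the SID and both home paths are hypothetical examples, and the guard skips any file that does not exist:

```shell
# Copy parameter and password files from the old home to the new 10.2 home.
# The SID and home paths below are examples; substitute your own values.
ORACLE_SID=${ORACLE_SID:-orcl}
OLD_HOME=${OLD_HOME:-/u01/app/oracle/product/9.2.0}
NEW_HOME=${NEW_HOME:-/u01/app/oracle/product/10.2.0/db_1}
for f in "init$ORACLE_SID.ora" "spfile$ORACLE_SID.ora" "orapw$ORACLE_SID"; do
    [ -f "$OLD_HOME/dbs/$f" ] && cp "$OLD_HOME/dbs/$f" "$NEW_HOME/dbs/"
done
echo "parameter/password files copied to $NEW_HOME/dbs"
```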
Step 27:
Update the oratab entry, to set the new ORACLE_HOME and disable automatic startup:
SID:ORACLE_HOME:N
Step 28:
Update the environment variables like ORACLE_HOME and PATH:
$ . oraenv
Step 29:
Make sure the following environment variables point to the new release (10g)
directories:
- ORACLE_HOME
- PATH
- ORA_NLS10
- ORACLE_BASE
- LD_LIBRARY_PATH
- LD_LIBRARY_PATH_64 (Solaris only)
- LIBPATH (AIX only)
- SHLIB_PATH (HPUX only)
- ORACLE_PATH
$ env | grep ORACLE_HOME
$ env | grep PATH
$ env | grep ORA_NLS10
$ env | grep ORACLE_BASE
$ env | grep LD_LIBRARY_PATH
$ env | grep ORACLE_PATH
Note that the ORA_NLS10 environment variable replaces the ORA_NLS33 environment
variable, so you should unset ORA_NLS33 and set ORA_NLS10.
As per Note 77442.1, you should set ORA_NLS10 to point to
$ORACLE_HOME/nls/data
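For a Bourne-type shell, the changes can be sketched as below; the 10.2 home path is an illustrative example only:

```shell
# Point the environment at the new 10.2 home (bash/ksh syntax).
# The path is an example; substitute your actual 10.2 ORACLE_HOME.
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}
unset ORA_NLS33                    # ORA_NLS33 is replaced by ORA_NLS10 in 10g
ORA_NLS10=$ORACLE_HOME/nls/data
export ORACLE_HOME PATH LD_LIBRARY_PATH ORA_NLS10
echo "ORACLE_HOME is now $ORACLE_HOME"
```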
Step 30:
Start up the database in upgrade mode:
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
Use Startup with the UPGRADE option:
SQL> startup upgrade
Step 31:
Create a SYSAUX tablespace. In Oracle Database 10g, the SYSAUX tablespace is used to
consolidate data from a number of tablespaces that were separate in previous releases.
The SYSAUX tablespace must be created with the following mandatory attributes:
- ONLINE
- PERMANENT
- READ WRITE
- EXTENT MANAGEMENT LOCAL
- SEGMENT SPACE MANAGEMENT AUTO
The Upgrade Information Tool (utlu102i.sql in step 4) provides an estimate of the
minimum required size for the SYSAUX tablespace in the SYSAUX Tablespace section.
The following SQL statement would create a 500 MB SYSAUX tablespace for the
database:
SQL> CREATE TABLESPACE sysaux DATAFILE 'sysaux01.dbf' SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
Step 32:
NOTE: Before performing the next action, disable any third party procedures that check
the complexity of schema passwords. During the upgrade, new schemas may be
created and these may initially have an insecure password (but only for a very short
period of time, because the SQL script that creates the new schema will then immediately
expire the password and lock the schema). If procedures are in place to enforce password
complexity, the "create user" statement may fail and cause configuration of a
component to fail.
Run the catupgrd.sql script, spooling the output so you can check whether any errors
occurred and investigate them:
SQL> spool upgrade.log
SQL> @catupgrd.sql
The catupgrd.sql script determines which upgrade scripts need to be run and then runs
each necessary script. You must run the script in the new release 10.2 environment.
The upgrade script creates and alters certain data dictionary tables.
It also upgrades and configures the following database components in the new release
10.2 database (if the components were installed in the database before the upgrade).
Oracle Database Catalog Views
Oracle Database Packages and Types
JServer JAVA Virtual Machine
Oracle Database Java Packages
Oracle XDK
Oracle Real Application Clusters
Oracle Workspace Manager
Oracle interMedia
Oracle XML Database
OLAP Analytic Workspace
Oracle OLAP API
OLAP Catalog
Oracle Text
Spatial
Oracle Data Mining
Oracle Label Security
Messaging Gateway
Expression Filter
Oracle Enterprise Manager Repository
Turn off the spooling of script results to the log file:
SQL> SPOOL OFF
Then, check the spool file and verify that the packages and procedures compiled
successfully. You named the spool file earlier in this step; the suggested name was
upgrade.log. Correct any problems you find in this file and rerun the appropriate
upgrade script if necessary. You can rerun any of the scripts described in this note as
many times as necessary.
Step 33:
Run utlu102s.sql, specifying the TEXT option:
SQL> @utlu102s.sql TEXT
This is the Post-upgrade Status Tool; it displays the status of the database
components in the upgraded database. The Upgrade Status Tool displays output
similar to the following:
Oracle Database 10.2 Upgrade Status Utility 04-20-2005 05:18:40
Component Status Version HH:MM:SS
Oracle Database Server VALID 10.2.0.1.0 00:11:37
JServer JAVA Virtual Machine VALID 10.2.0.1.0 00:02:47
Oracle XDK VALID 10.2.0.1.0 00:02:15
Oracle Database Java Packages VALID 10.2.0.1.0 00:00:48
Oracle Text VALID 10.2.0.1.0 00:00:28
Oracle XML Database VALID 10.2.0.1.0 00:01:27
Oracle Workspace Manager VALID 10.2.0.1.0 00:00:35
Oracle Data Mining VALID 10.2.0.1.0 00:15:56
Messaging Gateway VALID 10.2.0.1.0 00:00:11
OLAP Analytic Workspace VALID 10.2.0.1.0 00:00:28
OLAP Catalog VALID 10.2.0.1.0 00:00:59
Oracle OLAP API VALID 10.2.0.1.0 00:00:53
Oracle interMedia VALID 10.2.0.1.0 00:08:03
Spatial VALID 10.2.0.1.0 00:05:37
Oracle Ultra Search VALID 10.2.0.1.0 00:00:46
Oracle Label Security VALID 10.2.0.1.0 00:00:14
Oracle Expression Filter VALID 10.2.0.1.0 00:00:16
Oracle Enterprise Manager VALID 10.2.0.1.0 00:00:58
Note - in RAC environments, this script may suggest that the status of the RAC
component is INVALID when in actual fact it is VALID (as shown in the output from
DBA_REGISTRY)
NOTE: As per Note 456845.1, the output from the utlu102s.sql script may differ from
the output from DBA_REGISTRY. To check the current status of each component, run
the following SQL statement:
SQL> select comp_name, status, version from dba_registry;
Step 34:
Restart the database:
SQL> shutdown immediate (DO NOT use "shutdown abort" !!!)
SQL> startup restrict
Executing this clean shutdown flushes all caches, clears buffers and performs other
database housekeeping tasks.
This is needed if you want to upgrade specific components.
Step 35:
Run olstrig.sql to re-create DML triggers on tables with Oracle Label Security policies.
This step is only necessary if Oracle Label Security is in your database.
(Check from Step 33).
SQL> @olstrig.sql
Step 36:
Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
Verify that all expected packages and classes are valid.
If there are still invalid objects after running the script, run the following:
spool invalid_post.lst
Select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status
from
dba_objects where status <>'VALID';
spool off
Now compare the invalid objects in the file 'invalid_post.lst' with the invalid objects in
the file 'invalid_pre.lst' you created in step 8.
NOTE: If you have upgraded from version 9.2 to version 10.2 and find that the following
views are invalid, the views can be safely ignored (or dropped):
SYS.V_$KQRPD
SYS.V_$KQRSD
SYS.GV_$KQRPD
SYS.GV_$KQRSD
NOTE: If you have used OPatch to apply a CPU patch to the 10.2.0.x home, you now
need to follow the post-installation steps in the README file of the CPU patch to apply
the CPU patch to the upgraded database. This normally means running the catcpu.sql
script.
After Upgrading a Database
Step 37:
Shutdown the database and startup the database.
% sqlplus '/as sysdba'
SQL> shutdown
SQL> startup restrict
Step 38:
Complete Step 38 only if you upgraded your database from release 8.1.7;
otherwise skip to Step 40.
A) If you are not using N-type columns for user data, i.e. the query
select distinct OWNER, TABLE_NAME
from DBA_TAB_COLUMNS
where DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB')
and OWNER not in ('SYS','SYSTEM','XDB');
did not return rows in Step 6 of this note then:
% sqlplus '/as sysdba'
SQL> shutdown immediate
and go to step 40.
B) IF your version 8 NLS_NCHAR_CHARACTERSET was UTF8:
You can look up your previous NLS_NCHAR_CHARACTERSET using this select:
select * from nls_database_parameters where parameter
='NLS_SAVED_NCHAR_CS';
then:
% sqlplus '/as sysdba'
SQL> shutdown immediate
and go to step 40.
C) IF you are using N-type columns for *user* data *AND* your previous
NLS_NCHAR_CHARACTERSET was in the following list:
JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED
then the N-type columns *data* need to be converted to AL16UTF16:
To upgrade user tables with N-type columns to AL16UTF16 run the script utlnchar.sql:
% sqlplus '/as sysdba'
SQL> @utlnchar.sql
SQL> shutdown immediate;
go to step 40.
D) IF you are using N-type columns for *user* data *AND* your previous
NLS_NCHAR_CHARACTERSET was *NOT* in the following list:
JA16SJISFIXED , JA16EUCFIXED , JA16DBCSFIXED , ZHT32TRISFIXED
KO16KSC5601FIXED , KO16DBCSFIXED , US16TSTFIXED , ZHS16CGB231280FIXED
ZHS16GBKFIXED , ZHS16DBCSFIXED , ZHT16DBCSFIXED , ZHT16BIG5FIXED
ZHT32EUCFIXED
then import the data exported in point 8 of this note. The recommended NLS_LANG
during import is simply the NLS_CHARACTERSET, not the
NLS_NCHAR_CHARACTERSET
After the import:
% sqlplus '/as sysdba'
SQL> shutdown immediate;
go to step 40.
Step 39:
If your database has TIMESTAMP WITH TIMEZONE data, you must update the data so
that it is converted and stored based on the new time zone rules that come with the
upgrade. (Step 6).
If you used the export utility to export a copy of the affected tables, you should now use
the import utility to import your data from these tables back into your database. The
import utility will update the timestamp data as it imports.
If you used the manual script method, you will need to update the affected timestamp
data based on your backed up table. For example, if you previously backed up your table,
you need to run an update statement similar to the one below to update your timestamp
data.
UPDATE tztab t SET t.y =
(SELECT to_timestamp_tz(t1.y,'YYYY-MM-DD HH24.MI.SSXFF TZR')
FROM tztab_back t1
WHERE t.x=t1.x);
Step 40:
Now edit the init.ora:
- If you changed the value for NLS_LENGTH_SEMANTICS from CHAR to BYTE prior to
the upgrade (see step 21), set it back to CHAR. Otherwise, do not change the value of the
parameter to CHAR without careful evaluation and testing. Switching to CHAR
semantics can break application code. See Note 144808.1 for further information about
the usage of this parameter.
- If you changed the value for CLUSTER_DATABASE from TRUE to FALSE prior to the
upgrade, set it back to TRUE
Step 41:
Startup the database:
SQL> startup
Create a server parameter file (spfile) from the initialization parameter file:
SQL> create spfile from pfile;
This creates an spfile as a copy of the init.ora file located in the
$ORACLE_HOME/dbs directory.
Step 42:
Modify the listener.ora file:
For the upgraded instance(s), modify the ORACLE_HOME parameter to point to the
new ORACLE_HOME.
Step 43:
Start the listener
$ lsnrctl
LSNRCTL> start
Step 44:
Re-enable the cron and batch jobs disabled in step 14.
Step 45:
Change oratab entry to use automatic startup
SID:ORACLE_HOME:Y
Step 46: Only if using Oracle Cluster Registry [Else skip to Step 47]
Upgrade the Oracle Cluster Registry (OCR) configuration.
If you are using Oracle Cluster Services, then you must upgrade the Oracle Cluster
Registry (OCR) keys for the database.
* Use srvconfig from the 10g ORACLE_HOME. For example:
% srvconfig -upgrade -dbname <db_name> -orahome <pre-10g_Oracle_home>
If the output from the $ORACLE_HOME/bin/ocrdump command references the pre-
10g home, it may be necessary to do the following:
From the pre-10g home, run the command:
% srvctl remove database -d <db_name>
From the 10g home, run the commands:
% srvctl add database -d <db_name> -o <10g_Oracle_home>
% srvctl add instance -d <db_name> -i <instance1_name> -n <node1>
% srvctl add instance -d <db_name> -i <instance2_name> -n <node2>
Step 47:
Use the DBMS_STATS package to gather new statistics for your user objects. Using
statistics collected with a previous Oracle version may lead the cost-based optimizer
(CBO) to generate suboptimal execution plans.
References:
Note 114671.1 "Gathering Statistics for the Cost Based Optimizer"
Note 262592.1 "How to tune your Database after Migration/Upgrade"
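As a sketch of what this step involves, the call below regathers statistics for one application schema; SCOTT is a hypothetical owner name, so substitute your own:

```sql
-- SCOTT is a hypothetical application owner; substitute your own schema name.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCOTT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/
```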
Step 48: Only for EM Grid Control [Else you have completed the upgrade to 10g]
Enterprise Manager Grid Control (EMGC) will show that the upgraded database does not
have an inventory. To re-discover the database, do the following:
1. Go to EMGC => Targets => Databases
2. Select the upgraded database and remove it
3. Click "Add", enter the name of the host and click "Continue" to allow EMGC to
re-discover the database in the correct home with the correct inventory.
Useful Hints
** Upgrading With Read-Only and Offline Tablespaces
The Oracle database can read file headers created prior to Oracle 10g, so you do not need
to do anything to them during the upgrade. The only exception to this is if you want to
transport tablespaces created prior to Oracle 10g, to another platform. In this case, the
file headers must be made read-write at some point before the transport. However, there
are no special actions required on them during the upgrade.
The file headers of offline datafiles are updated later when they are brought online,
and the file headers of read-only tablespaces are updated if and when they are made
read-write sometime after the upgrade. In any other circumstance, read-only
tablespaces never have to be made read-write.
It is a good idea to OFFLINE NORMAL all tablespaces except for SYSTEM and those
containing rollback/UNDO segments prior to migration. This way, if the migration
fails, only the SYSTEM and rollback datafiles need to be restored rather than the
entire database.
Note: You must OFFLINE the TABLESPACE as migrate does not allow OFFLINE files in
an ONLINE tablespace.
Note: If you are upgrading from Oracle9i, the CWMLITE tablespace (which contains
OLAP objects) will need to be ONLINE during the upgrade (so that the OLAP objects can
be upgraded to 10g and moved to the SYSAUX tablespace)
** Converting Databases to 64-bit Oracle Database Software
If you are installing 64-bit Oracle Database 10g software but were previously using a
32-bit Oracle Database installation, then the databases will automatically be converted
to 64-bit during the upgrade to Oracle Database 10g, except when upgrading from
Release 1 (10.1) to Release 2 (10.2).
The process is not automatic for the release 1 to release 2 upgrade, but is automatic
for all other upgrades. This is because the utlip.sql script is not run during the
release 1 to release 2 upgrade to invalidate all PL/SQL objects. You must run the
utlip.sql script as the last step in the release 10.1 environment, before upgrading
to release 10.2.
** If an error occurs while executing catupgrd.sql
If an error occurs while running the catupgrd.sql script, simply rerun the script
once the problem is fixed to finish and complete the upgrade process.
Appendix A: Initialization Parameters Obsolete in 10g
ENQUEUE_RESOURCES
DBLINK_ENCRYPT_LOGIN
HASH_JOIN_ENABLED
LOG_PARALLELISM
MAX_ROLLBACK_SEGMENTS
MTS_CIRCUITS
MTS_DISPATCHERS
MTS_LISTENER_ADDRESS
MTS_MAX_DISPATCHERS
MTS_MAX_SERVERS
MTS_MULTIPLE_LISTENERS
MTS_SERVERS
MTS_SERVICE
MTS_SESSIONS
OPTIMIZER_MAX_PERMUTATIONS
ORACLE_TRACE_COLLECTION_NAME
ORACLE_TRACE_COLLECTION_PATH
ORACLE_TRACE_COLLECTION_SIZE
ORACLE_TRACE_ENABLE
ORACLE_TRACE_FACILITY_NAME
ORACLE_TRACE_FACILITY_PATH
PARTITION_VIEW_ENABLED
PLSQL_NATIVE_C_COMPILER
PLSQL_NATIVE_LINKER
PLSQL_NATIVE_MAKE_FILE_NAME
PLSQL_NATIVE_MAKE_UTILITY
ROW_LOCKING
SERIALIZABLE
TRANSACTION_AUDITING
UNDO_SUPPRESS_ERRORS
Appendix B: Initialization Parameters Deprecated in 10g
LOGMNR_MAX_PERSISTENT_SESSIONS
MAX_COMMIT_PROPAGATION_DELAY
REMOTE_ARCHIVE_ENABLE
SERIAL_REUSE
SQL_TRACE
BUFFER_POOL_KEEP (replaced by DB_KEEP_CACHE_SIZE)
BUFFER_POOL_RECYCLE (replaced by DB_RECYCLE_CACHE_SIZE)
GLOBAL_CONTEXT_POOL_SIZE
LOCK_NAME_SPACE
LOG_ARCHIVE_START
MAX_ENABLED_ROLES
PARALLEL_AUTOMATIC_TUNING
PLSQL_COMPILER_FLAGS (replaced by PLSQL_CODE_TYPE and PLSQL_DEBUG)
Known issues
1) When upgrading from 9iR2 to 10.2.0.X, the utlu102i.sql script (run as directed in
step 2) may advise adding streams_pool_size=50331648 to the init.ora file, but Oracle
then rejects streams_pool_size as an invalid parameter. STREAMS_POOL_SIZE was
introduced in release 10gR1, so this message may be ignored for database version
9iR2 or lower.
2) A customer reported that with shared_pool_size kept at 150 MB, catmeta.sql fails
with insufficient shared memory during the processing of view KU$_PHFTABLE_VI.
Please set shared_pool_size to 200M.
3) During the upgrade, the following error was encountered:
create or replace
*
ERROR at line 1:
ORA-06553: PLS-213: package STANDARD not accessible.
ORA-00955: name is already used by an existing object
Please make sure to set the following init parameters as below in the spfile/init
file, or comment them out to use their default values, at the time of upgrading
the database.
PLSQL_V2_COMPATIBILITY = FALSE
PLSQL_CODE_TYPE = INTERPRETED # Only applicable to 10gR1
PLSQL_NATIVE_LIBRARY_DIR = ""
PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT = 0
Refer to Note 170282.1 PLSQL_V2_COMPATIBILITY=TRUE causes STANDARD and
DBMS_STANDARD to Error at Compile.
Always disconnect from the session that issues the STARTUP and connect as a fresh
session before running any further SQL. For example, on upgrade to 10.2: start the
instance with the UPGRADE option, exit SQL*Plus, reconnect in a fresh SQL*Plus session
as SYSDBA, and then run the upgrade scripts.
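As a sketch, the sequence looks like this in SQL*Plus (catupgrd.sql is the 10.2 upgrade script; adjust paths to your environment):

```sql
-- First session: start the instance for upgrade, then leave it.
SQL> STARTUP UPGRADE
SQL> EXIT

-- Fresh session: reconnect as SYSDBA and run the upgrade script.
$ sqlplus / as sysdba
SQL> SPOOL catupgrd.log
SQL> @?/rdbms/admin/catupgrd.sql
```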
Revision History
Support has been asked to include this new section in the note. It is not possible to
provide a completely accurate revision history because many changes have been made
since the note was first created in 2005, but now that this section exists, Support will
keep it up to date.
18-JUL-2005
Article created
31-JUL-2005
Article published
24-JAN-2007
- Explicitly set AQ_TM_PROCESSES=0 in init.ora (step 21)
29-JAN-2007
- V_$ and GV_$ views can be dropped (step 36)
03-DEC-2007
- Drop table XDB.MIGR9202STATUS from the OLD home (step 18)
- Full cold backup OR an online backup using RMAN (step 20)
05-FEB-2008
- Added reference to Note 406472.1 in the list of prerequisites
- N-type columns in tables owned by XDB can be ignored (step 6)
- Added a workaround for ORA-1403 from utlu102i.sql (step 2)
- Added reference to Note 471479.1 in the list of prerequisites
27-FEB-2008
- Added some further commands to step 46
- Added a step about gathering new statistics (step 47)
- Added a reference to Note 407031.1 in step 2
- Added advice regarding ORA_NLS10 (step 29)
- Skip step 6 if upgrading from 9.x to 10.2
- Keep CWMLITE tablespace online (useful hints)
- Check that DBA_REGISTRY contains data (step 8)
- Added reference to Note 465951.1 in the list of prerequisites
- Use GATHER_SCHEMA_STATS in 8i and 9i (step 7)
18-APR-2008
- Added this “Revision History” section to the note
- Clarified when to set UNDO_MANAGEMENT=AUTO in step 21
- Added reference to Note 135090.1 in step 21
- Added reference to Note 293658.1 in the list of prerequisites
- Added reference to Note 316900.1 in the list of prerequisites
- Added reference to Note 466181.1 in the list of prerequisites
- Added reference to Note 557242.1 in the list of prerequisites
- Added some info to step 36 about running catcpu.sql if a CPU patch is applied to the
home
- Explicitly set JOB_QUEUE_PROCESSES=0 in init.ora (step 21)
- Added a step about discovering the upgraded database in EMGC (step 48)
21-APR-2008
- Added a note suggesting that password complexity checking procedures are disabled
(step 32)
- Added a warning about using NLS_LENGTH_SEMANTICS=CHAR (step 40)
29-SEP-2008
- Added reference to Note 565600.1 in the list of prerequisites
- Added reference to Note 603714.1 in the list of prerequisites
- Added reference to Note 456845.1 in step 33
- Clarified step 21
References
Note 135090.1 - Managing Rollback/Undo Segments in AUM (Automatic Undo
Management)
Note 159657.1 - Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to
Oracle9iR2 (9.2.0)
Note 170282.1 - PLSQL_V2_COMPATIBLITY=TRUE causes STANDARD and
DBMS_STANDARD to Error at Compile
Note 263809.1 - Complete checklist for manual upgrades to 10gR1 (10.1.0.x)
Note 293658.1 - 10.1 or 10.2 Patchset Install Getting ORA-29558 JAccelerator (NCOMP)
And ORA-06512
Note 316900.1 - ALERT: Oracle 10g Release 2 (10.2) Support Status and Alerts
Note 356082.1 - ORA-7445 [qmeLoadMetadata()+452] During 10.1 to 10.2 Upgrade
Note 406472.1 - Mandatory Patch 5752399 for 10.2.0.3 on Solaris 64-bit and Filesystems
Managed By Veritas or Solstice Disk Suite software
Note 407031.1 - ORA-01403 no data found while running utlu102i.sql/utlu102x.sql on
8174 database
Note 412271.1 - ORA-600 [22635] and ORA-600 [KOKEIIX1] Reported While
Upgrading Or Patching Databases To 10.2.0.3
Note 456845.1 - UTLU102S.SQL May Show Different Results Than Select From
DBA_REGISTRY
Note 465951.1 - ORA-600 [kcbvmap_1] or Ora-600 [Kcliarq_2] On Startup Upgrade
Moving From a 32-Bit To 64-Bit Release
Note 466181.1 - 10g Upgrade Companion
Note 471479.1 - IOT Corruptions After Upgrade from COMPATIBLE <= 9.2 to
COMPATIBLE >= 10.1
Note 557242.1 - Upgrade Gives Ora-29558 Error Despite of JAccelerator Has Been
Installed
Note 565600.1 - ERROR IN CATUPGRD: ORA-00904 IN DBMS_SQLPA
Note 603714.1 - 10.2.0.4 Catupgrd.sql Fails With ORA-03113 Creating
SYS.KU$_XMLSCHEMA_VIEW
Oracle Database Upgrade Guide 10g Release 2 (10.2) Part Number B14238-01
http://download.oracle.com/docs/cd/B19306_01/server.102/b14238/toc.htm
Keywords
UPGRADE~TO~10GR2 ; NCOMP ; UPGRADE~FROM~8.1.7 ; UPGRADE~FROM~9.2.0
; UPGRADE~TO~10.2.0 ; UPGRADE~FROM~8.1.7.4