1. The document describes steps taken to recover a tablespace in an Oracle database where the backup of one datafile does not exist.
2. A new datafile is added to the tablespace and then deleted at the OS level to simulate the scenario.
3. RMAN is used to restore the tablespace, and it automatically recreates the missing datafile during the restore process.
4. Finally, recovery is performed on the tablespace to recover it to the current state.
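The restore-and-recover flow above can be sketched in RMAN; the tablespace name USERS is an illustrative assumption, not taken from the document:

```sql
-- Take the affected tablespace offline (the missing datafile makes it unusable anyway)
SQL> ALTER TABLESPACE users OFFLINE IMMEDIATE;

-- RMAN recreates the datafile that has no backup automatically during restore,
-- then recovery applies redo to bring it to the current state
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;

SQL> ALTER TABLESPACE users ONLINE;
```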
The document describes the steps to move an Oracle 12c database from a non-ASM storage to ASM storage. It involves:
1. Checking the current database files and parameters.
2. Creating the required directories in ASM for datafiles, control files, online redo logs, etc.
3. Configuring the fast recovery area.
4. Backing up the database files and control file, copying them to ASM, and switching to the copies.
5. Adding new online redo logs to ASM and dropping the old logs.
6. Adding a new tempfile to ASM and dropping the old tempfile.
7. Creating a new SPFILE in ASM.
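The copy-and-switch step (step 4 above) might look like the following RMAN sketch; the disk group name +DATA is an assumption:

```sql
-- Copy every datafile into the ASM disk group, then repoint the database at the copies
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';

-- Database must be mounted (not open) for the switch
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;

SQL> ALTER DATABASE OPEN;
```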
The document describes steps to identify and repair a block corruption in an Oracle database:
1. Use RMAN's Data Recovery Advisor to list, analyze, and repair the corruption. It identified a corrupted block in the USERS tablespace datafile and recommended restoring it from backup with block media recovery.
2. Verify the corruption using DBVERIFY and validate the tablespace with RMAN backup. Both tools confirmed the single corrupted block.
3. Restore the corrupted block using RMAN block recovery to fix the issue, and revalidate that the tablespace is no longer corrupted.
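A minimal sketch of the block media recovery step; the file and block numbers are hypothetical:

```sql
-- Identify corrupt blocks recorded by a prior VALIDATE/backup run
SQL> SELECT file#, block# FROM v$database_block_corruption;

-- Repair a single block (file 4, block 123 are example values)
RMAN> RECOVER DATAFILE 4 BLOCK 123;

-- Or repair everything currently listed in v$database_block_corruption
RMAN> RECOVER CORRUPTION LIST;
```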
RMAN cloning when both directory and DB name are the same (Subhani Shaik)
1. The document describes steps to duplicate a database where the source and destination databases have the same name. It involves taking a backup of the source database, copying files to the destination, and using RMAN to restore and recover the database.
2. Key steps include making the directory structure the same on source and destination, starting the destination database in nomount mode, restoring the control file and datafiles, recovering changes, and opening the database.
3. The destination database is verified by checking the locations of datafiles, control files and redo logs.
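The restore-and-recover core of the procedure can be sketched as follows; the backup paths are assumptions:

```sql
-- On the destination, with the same directory structure as the source
SQL>  STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_bkp.ctl';
SQL>  ALTER DATABASE MOUNT;

-- Make the copied backup pieces known to the restored control file
RMAN> CATALOG START WITH '/backup/';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;

SQL>  ALTER DATABASE OPEN RESETLOGS;
```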
The document describes two examples of using PRM (Physical Recovery Manager) to recover damaged Oracle data tables without backups.
In the first example, a segment header in a data file is physically damaged, making the table unable to be read. PRM is able to correctly read the table data after loading the damaged data file.
In the second example, a data file is taken offline without archiving enabled, and the redo logs are overwritten. Conventional recovery methods cannot bring the data file back online. PRM can recover all the data from the unrecoverable data file by loading it in dictionary mode.
1) The document describes the steps to change the database name from ANAR_F to ANAR_F1. This involves using the nid tool to change the database ID and name, updating the parameter file, and restarting the database with RESETLOGS.
2) The nid tool is run to change the database ID and name in the control files and datafiles.
3) The parameter file is updated by changing the DB_NAME parameter and the database is then restarted with RESETLOGS to complete the name change.
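The nid sequence might look like this sketch; the names follow the document's ANAR_F to ANAR_F1 example:

```sql
-- The database must be cleanly mounted (not open) before running nid
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

-- From the OS shell:  $ nid TARGET=sys DBNAME=ANAR_F1
-- (answer Y to the prompt; nid updates the DBID and name in control files and datafiles)

-- Update the parameter file and complete the rename with RESETLOGS
SQL> ALTER SYSTEM SET db_name='ANAR_F1' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE OPEN RESETLOGS;
```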
The document outlines the steps to decommission an Oracle database from a 2-node RAC cluster. The process involves: verifying backups, blocking the database in EM, shutting down the database on both nodes, mounting the database in restricted mode on one node, dropping the database, removing the database and instance configuration, and deleting the services in the cluster.
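The drop-and-cleanup phase could be sketched as below; the database name ORCL and service name are assumptions:

```sql
-- From the OS shell on the cluster:  $ srvctl stop database -d ORCL
-- Then on one node, mount exclusively in restricted mode and drop:
SQL> STARTUP MOUNT RESTRICT;
SQL> DROP DATABASE;

-- Remove the service and database registration from the cluster:
--   $ srvctl remove service -d ORCL -s orcl_svc
--   $ srvctl remove database -d ORCL
```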
The document discusses setting up an Oracle 12c Active Data Guard physical standby database using RMAN DUPLICATE FROM ACTIVE. It involves 3 steps:
1) Configuring the primary and standby databases, including creating required directories, adding static entries to listener.ora, and editing tnsnames.ora.
2) Running RMAN DUPLICATE FROM ACTIVE on the primary to create the standby database instance while it is in NOMOUNT mode.
3) After duplicate completes, configuring redo transport on both primary and standby, adding standby redo logs, and opening the standby database to start managed recovery.
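The duplicate step (2 above) can be sketched as follows; the connect identifiers prim and stby are assumptions:

```sql
-- Standby auxiliary instance must already be started in NOMOUNT
RMAN> CONNECT TARGET sys@prim
RMAN> CONNECT AUXILIARY sys@stby

-- Create the physical standby directly over the network from the open primary
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY
      FROM ACTIVE DATABASE
      NOFILENAMECHECK;
```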
This document provides an overview of managing the Oracle database instance. It covers starting and stopping the Oracle database and components using Oracle Enterprise Manager and SQL*Plus. It describes accessing databases with SQL*Plus and modifying initialization parameters. It also discusses the stages of database startup, shutdown options, viewing the alert log, and accessing dynamic performance views.
This document is a tutorial on managing pluggable databases in Oracle 12c. It discusses how to rename, manage, and drop pluggable databases. It also covers security topics like common vs local users and roles, and how privileges are handled between the CDB root and pluggable databases. The tutorial demonstrates renaming a pluggable database called "TEST" to "new", managing tablespaces and datafiles between the root and pluggable databases, and creating both common and local users and roles.
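The rename demonstrated in the tutorial follows the standard sequence sketched below (the PDB names TEST and new come from the summary; the exact steps shown in the original document may differ):

```sql
-- A PDB can only be renamed while open in restricted mode
ALTER PLUGGABLE DATABASE test CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE test OPEN RESTRICTED;

-- Switch into the PDB and rename it
ALTER SESSION SET CONTAINER = test;
ALTER PLUGGABLE DATABASE RENAME GLOBAL_NAME TO new;

-- Bounce the PDB into normal open mode
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;
```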
Hello everyone! I hope everybody is doing well at work and in their busy lives.
Today I am listing some interesting ORA- errors that I ran into recently as a beginner; luckily, I managed to solve them too. So here I am, listing the errors along with their solutions.
These are errors you may face, or might already be facing, when you work with Oracle.
So, be fearless and have a look. If you need any help, please do let me know.
Thank you.
The document summarizes new features in Oracle Database 12c Recovery Manager (RMAN). Key points include: RMAN now supports pluggable databases and allows point-in-time recovery of individual pluggable databases. It also enables running SQL statements and recovering individual tables from backups. Active duplicate operations in RMAN utilize backup sets for more efficient cross-platform restores of databases.
This document outlines the steps to upgrade an Oracle database from version 11.2.0.4 to 12c. It includes prechecks such as validating objects, checking for duplicate objects, and gathering statistics. It also details backup procedures like enabling flashback and creating a restore point. The key steps are running the preupgrade tool, disabling jobs and scripts, validating tablespaces, and removing the EM repository before initiating the upgrade using DBUA.
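The flashback and restore-point safety net described above corresponds to SQL like this sketch; the FRA path, size, and restore point name are assumptions:

```sql
-- Configure a fast recovery area and enable flashback before the upgrade
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/fra';
ALTER DATABASE FLASHBACK ON;

-- A guaranteed restore point allows a fast rollback if the upgrade fails
CREATE RESTORE POINT before_12c_upgrade GUARANTEE FLASHBACK DATABASE;
```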
The document describes the steps to migrate an Oracle database from a file system to ASM (Oracle Automatic Storage Management). The key steps are:
1. Configure the flash recovery area and migrate datafiles, control files, redo logs and the spfile to ASM disk groups.
2. Use RMAN (Recovery Manager) to backup and copy the database files to ASM.
3. Update database configuration files like the control file to point to the new ASM locations.
Once complete, the Oracle database is fully migrated and using ASM for storage.
12c database migration from ASM storage to non-ASM storage (Monowar Mukul)
1. The document describes the process of migrating a database from ASM to non-ASM storage. This involves taking backups, changing initialization parameters, creating new datafiles and redo logs in non-ASM locations, mounting and opening the database.
2. Key steps include taking an ASM backup, creating a pfile with new datafile and logfile locations, restoring the controlfile, copying datafiles to the new locations, renaming datafiles, and adding new redo logs.
3. After completing these steps, the database is successfully migrated from ASM to non-ASM storage, with the datafiles and redo logs now residing in normal filesystem locations instead of ASM.
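The copy-and-switch back to filesystem storage (steps 2 and 3 above) can be sketched as follows; the target path is an assumption:

```sql
-- Copy every datafile out of ASM into a filesystem location
RMAN> BACKUP AS COPY DATABASE FORMAT '/u01/oradata/ORCL/%U';

-- With the database mounted, repoint it at the filesystem copies
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;

SQL>  ALTER DATABASE OPEN;
```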
The document discusses configuring and managing a read-only standby Oracle 8i database. It describes standby database basics, new Oracle 8i capabilities for standby databases, and how to set up a standby database including initial configuration, starting up the standby instance, and ongoing operational issues. It also provides an example of recovering a standby database after a primary database change and the resulting state of data on the standby.
This document provides a tutorial on managing pluggable databases in Oracle 12c. It discusses how to check the container name and ID, view pluggable databases, create a new pluggable database called TEST, open and close pluggable databases, and manage tablespaces within pluggable databases. The key steps covered are using SQL commands like SELECT, ALTER, and CREATE to work with the container database, pluggable databases, tablespaces, and data files.
This document provides instructions for setting up a physical standby database for an Oracle E-Business Suite Release 12.2 database using Oracle 11gR2. It describes configuring the primary database for archiving and adding standby redo logs. It also covers copying the Oracle home to the standby server, modifying initialization parameters, and using RMAN to duplicate the primary database and recover it as a physical standby. Key steps include enabling archive logging on the primary, setting the log archive destination, and starting redo transport services to ship archived logs to the standby.
1) Oracle 10g introduces flashback query, which allows users to query past states of data within a specified time period by reading undo data.
2) Flashback table allows users to recover accidentally dropped tables from the recycle bin.
3) Rollback monitoring provides estimated time to complete long running transactions such as rollbacks.
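The three 10g features listed above map to SQL like the following; the table name and interval are illustrative:

```sql
-- 1) Flashback query: read a table as it was 15 minutes ago
SELECT * FROM emp AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;

-- 2) Flashback table: restore a dropped table from the recycle bin
FLASHBACK TABLE emp TO BEFORE DROP;

-- 3) Rollback monitoring: progress and remaining time of long-running operations
SELECT sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  time_remaining > 0;
```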
OTN TOUR 2016 - DBA Commands and Concepts That Every Developer Should Know (Alex Zaballa)
The document provides an overview of DBA commands and concepts that every developer should know. It includes sections on availability of Oracle Database 12c, parallel queries, row chaining and migration, explain plans, Oracle Flashback Query and Table, schema management, rollbacks, pending statistics, bulk processing vs row-by-row, Virtual Private Database, extended data types, SQL text expansion, identity columns, and virtual columns. The presentation aims to help developers better understand database administration tasks and functionality.
OTN TOUR 2016 - DBA Commands and Concepts That Every Developer Should Know (Alex Zaballa)
This document contains a summary of an Oracle DBA presentation on DBA commands and concepts that every developer should know. The presentation covered topics such as parallel queries, row chaining, explain plans, flashback queries, pending statistics, bulk processing, virtual private databases, extended data types, identity columns, and online table redefinition. It provided examples and demonstrations of many of these commands and concepts.
1. The document discusses database administration topics like starting and stopping the database control, configuring database instances, parameter files, and using SQL*Plus to view parameters.
2. It provides examples of initializing parameter values both statically in parameter files and dynamically using ALTER SESSION and ALTER SYSTEM.
3. The document also covers starting up and shutting down database instances using options like NOMOUNT, MOUNT, OPEN, and SHUTDOWN.
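The static vs. dynamic parameter changes and staged startup described above look like this in SQL*Plus; the parameter choices and values are illustrative:

```sql
-- Dynamic, session scope only
ALTER SESSION SET nls_date_format = 'YYYY-MM-DD';

-- Dynamic, instance-wide, persisted in the spfile
ALTER SYSTEM SET sga_target = 2G SCOPE=BOTH;

-- Static parameter: recorded in the spfile, takes effect only after restart
ALTER SYSTEM SET processes = 500 SCOPE=SPFILE;

-- Staged startup
STARTUP NOMOUNT;        -- start the instance only
ALTER DATABASE MOUNT;   -- read the control file
ALTER DATABASE OPEN;    -- open the datafiles for use
```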
This document summarizes the key aspects of configuring and using Oracle Dataguard for disaster recovery. It discusses setting up a physical standby database, monitoring the replication process, and utilizing the standby for tasks like reporting and testing. Switching the primary and standby roles is also covered.
The document provides steps for cloning an Oracle EBS R12 environment from a source (PROD) system to a target (TEST) system. Key steps include:
1. Backing up files and databases from the source including applications files, database parameter files, and database backups.
2. Copying the backed up files to the target system and modifying configuration files to point to the target system.
3. Restoring and recovering the database on the target system using RMAN and modifying datafile names.
4. Running scripts to clone the application tier files and configure the applications.
5. Performing post-clone tasks like dropping and recreating temp tablespaces and cleaning up configuration.
T3 is an optimized protocol used to transport data between WebLogic Server and other Java programs. WebLogic Server tracks each Java Virtual Machine (JVM) it connects to and creates a single T3 connection to carry all traffic for a JVM. For example, if a client accesses an enterprise bean and JDBC connection pool on WebLogic Server, a single network connection is established between the WebLogic Server JVM and the client JVM.
Install and upgrade Oracle Grid Infrastructure 12.1.0.2 (Biju Thomas)
1) The document describes upgrading an Oracle Grid Infrastructure installation from 12.1.0.1 to 12.1.0.2. There were issues with the rootupgrade script, but moving the ASM SPFILE location resolved it.
2) Key steps included running the Grid Infrastructure 12.1.0.2 installation, applying the PSU patch 19791375, and verifying the services started up successfully from the new Oracle home.
3) Applying the latest OPatch version 12.1.0.1.5 prior to installing the PSU is also documented.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to part 6 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover test automation with generative AI and OpenAI.
This webinar on UiPath test automation with generative AI and OpenAI offers an in-depth exploration of cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into integrating generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, improve testing accuracy, and shorten the software testing life cycle. Topics include the integration process, practical use cases, and the benefits of AI-driven automation for UiPath testing initiatives. By attending, testers and automation professionals can gain valuable insights into harnessing AI to optimize their test automation workflows within the UiPath ecosystem, driving efficiency and quality in software development.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence (IndexBug)
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Introduction of Cybersecurity with OSS at Code Europe 2024
br_test_lossof-datafile_10g.doc
Step 1: Confirm the database name and identify the tablespace to be used for the DR test.
SQL> select INSTANCE_NAME, VERSION from v$instance;
INSTANCE_NAME VERSION
---------------- -----------------
opsdba 10.2.0.2.0
SQL> select name from v$tablespace;
NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
USERS
TEMP1
SQL> select file_name from dba_data_files;
FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/users01.dbf
/u02/ORACLE/opsdba/sysaux01.dbf
/u02/ORACLE/opsdba/undotbs01.dbf
/u02/ORACLE/opsdba/system01.dbf
/u02/ORACLE/opsdba/users05.dbf
/u02/ORACLE/opsdba/users02.dbf
/u02/ORACLE/opsdba/users03.dbf
/u02/ORACLE/opsdba/users06.dbf
/u02/ORACLE/opsdba/users07.dbf
/u02/ORACLE/opsdba/users04.dbf
10 rows selected.
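Before starting, it is also worth confirming that the database runs in ARCHIVELOG mode, since the tablespace restore and media recovery performed later depend on archived redo being available. A quick check (not part of the original transcript):

```sql
SQL> select log_mode from v$database;
SQL> archive log list;
```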
Step 2: Create a new tablespace with one datafile, which will be used for the recovery exercise.
SQL> create tablespace drtbs datafile '/u02/ORACLE/opsdba/drtbs1.dbf'
size 100M extent management local;
Tablespace created.
SQL> select name from v$tablespace;
NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
USERS
DRTBS
TEMP1
6 rows selected.
SQL> select file_name from dba_data_files where tablespace_name='DRTBS';
FILE_NAME
----------------------------------------------------------------------
/u02/ORACLE/opsdba/drtbs1.dbf
SCENARIO: Recovering a datafile for which no backup exists (in a 10g database, using RMAN).
Step 3: Take a full database backup with RMAN. Note that the backup output below includes drtbs1.dbf only; drtbs2.dbf is added in Step 4, after this backup.
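The backup output that follows was presumably produced by a command along these lines (the exact command is not shown in the transcript; this is a hedged reconstruction):

```sql
RMAN> backup database plus archivelog;
```

Step 12 later uses the same command, and its output pattern matches what is shown here: archive log backup set, full datafile backup set, then a control file and SPFILE autobackup.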
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u02/ORACLE/opsdba/system01.dbf
input datafile fno=00003 name=/u02/ORACLE/opsdba/sysaux01.dbf
input datafile fno=00002 name=/u02/ORACLE/opsdba/undotbs01.dbf
input datafile fno=00011 name=/u02/ORACLE/opsdba/drtbs1.dbf
input datafile fno=00004 name=/u02/ORACLE/opsdba/users01.dbf
input datafile fno=00005 name=/u02/ORACLE/opsdba/users02.dbf
input datafile fno=00006 name=/u02/ORACLE/opsdba/users03.dbf
input datafile fno=00007 name=/u02/ORACLE/opsdba/users05.dbf
input datafile fno=00010 name=/u02/ORACLE/opsdba/users04.dbf
input datafile fno=00008 name=/u02/ORACLE/opsdba/users06.dbf
input datafile fno=00009 name=/u02/ORACLE/opsdba/users07.dbf
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.613089455 tag=TAG20070128T223735 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 28-JAN-07
Starting backup at 28-JAN-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=2 recid=380 stamp=613089480
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.149.1.1.613089480 tag=TAG20070128T223800 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07
Starting Control File and SPFILE Autobackup at 28-JAN-07
piece handle=/opt/oracle/product10gpr2/dbs/c-1493612009-20070128-03 comment=NONE
Finished Control File and SPFILE Autobackup at 28-JAN-07
RMAN>exit
Step 4: Add a new datafile to the tablespace and verify that the new file is now a
member of it. Also switch a few log files for confirmation.
SQL> select file_name from dba_data_files where tablespace_name=
'DRTBS';
FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/drtbs1.dbf
SQL> alter tablespace drtbs add datafile '/u02/ORACLE/opsdba/drtbs2.dbf' size 100m;
Tablespace altered.
SQL> select file_name from dba_data_files where tablespace_name='DRTBS';
FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/drtbs1.dbf
/u02/ORACLE/opsdba/drtbs2.dbf
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
Step 5: Create a new table in that tablespace and perform some DML operations. After
the DML operations, switch some log files as well.
SQL> create table t1(col1 number(10)) tablespace DRTBS;
Table created.
SQL> insert into t1 values (&a);
Enter value for a: 1
old 1: insert into t1 values(&a)
new 1: insert into t1 values(1)
1 row created.
SQL> /
Enter value for a: 2
old 1: insert into t1 values(&a)
new 1: insert into t1 values(2)
1 row created.
SQL> /
Enter value for a: 3
old 1: insert into t1 values(&a)
new 1: insert into t1 values(3)
1 row created.
SQL> /
Enter value for a: 4
old 1: insert into t1 values(&a)
new 1: insert into t1 values(4)
1 row created.
SQL> commit;
Commit complete.
SQL> select * from t1;
COL1
----------
1
2
3
4
SQL> alter system switch logfile;
System altered.
Step 6: At the OS level, remove all files of that tablespace, including the newly added one
(for which no backup exists).
opsdba:/opt/oracle>cd /u02/ORACLE/opsdba/
opsdba:/u02/ORACLE/opsdba>ls -lrt drtbs*.dbf
total 1441496
-rw-r----- 1 oracle dba 104865792 Jan 28 22:38 drtbs1.dbf
-rw-r----- 1 oracle dba 104865792 Jan 28 23:08 drtbs2.dbf
opsdba:/u02/ORACLE/opsdba>rm -r drtbs*.dbf
opsdba:/u02/ORACLE/opsdba>ls -lrt drtbs*.dbf
ls: drtbs*.dbf: No such file or directory
opsdba:/u02/ORACLE/opsdba>
Step 7: Try to bring the tablespace offline; the following error message is returned.
opsdba:/u02/ORACLE/opsdba>sql
SQL> alter tablespace drtbs offline;
alter tablespace drtbs offline
*
ERROR at line 1:
ORA-01116: error in opening database file 11
ORA-01110: data file 11: '/u02/ORACLE/opsdba/drtbs1.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Step 8: Now bring the tablespace offline with the IMMEDIATE option and confirm.
SQL> alter tablespace drtbs offline immediate;
Tablespace altered.
SQL> select TABLESPACE_NAME,STATUS from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
USERS ONLINE
TEMP1 ONLINE
DRTBS OFFLINE
6 rows selected.
Step 9: Connect to RMAN and confirm that no backup exists for the newly added
datafile. Then restore the tablespace: RMAN creates the newly added datafile
as part of the restore process. This is a new feature in 10g.
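RMAN can recreate the never-backed-up datafile because the control file records the file's creation SCN and size. For comparison, the manual pre-10g equivalent would be something like the following sketch (paths as in this test; not part of the original transcript):

```sql
-- Recreate the lost datafile as an empty file from its
-- control-file entry, then apply redo to bring it current.
SQL> alter database create datafile '/u02/ORACLE/opsdba/drtbs2.dbf';
SQL> recover datafile '/u02/ORACLE/opsdba/drtbs2.dbf';
```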
SQL> select file_id, file_name from dba_data_files where tablespace_name='DRTBS';
   FILE_ID FILE_NAME
---------- ------------------------------
        11 /u02/ORACLE/opsdba/drtbs1.dbf
        12 /u02/ORACLE/opsdba/drtbs2.dbf
SQL> exit;
opsdba:/u02/ORACLE/opsdba>rman target /
Recovery Manager: Release 10.2.0.2.0 - Production on Sun Jan 28 23:18:09 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: OPSDBA (DBID=1493612009)
RMAN> list backup of datafile 11;
using target database control file instead of recovery catalog
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
129 Full 669.09M DISK 00:00:15 28-JAN-07
BP Key: 129 Status: AVAILABLE Compressed: NO Tag: TAG20070128T223735
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.613089455
List of Datafiles in backup set 129
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 2747296 28-JAN-07 /u02/ORACLE/opsdba/drtbs1.dbf
RMAN> list backup of datafile 12;
(no output is returned: no backup exists for datafile 12)
RMAN> restore tablespace drtbs;
Starting restore at 28-JAN-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=144 devtype=DISK
creating datafile fno=12 name=/u02/ORACLE/opsdba/drtbs2.dbf
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00011 to /u02/ORACLE/opsdba/drtbs1.dbf
channel ORA_DISK_1: reading from backup piece /opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.613089455
channel ORA_DISK_1: restored backup piece 1
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.613089455 tag=TAG20070128T223735
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
Finished restore at 28-JAN-07
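If the original directory were unusable (for example, a failed disk), the restore could instead be redirected to a new location with SET NEWNAME. A hedged sketch, with a hypothetical /u03 destination:

```sql
RMAN> run {
  set newname for datafile 11 to '/u03/ORACLE/opsdba/drtbs1.dbf';
  set newname for datafile 12 to '/u03/ORACLE/opsdba/drtbs2.dbf';
  restore tablespace drtbs;
  switch datafile all;   -- point the control file at the new paths
  recover tablespace drtbs;
}
```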
Step 10: Start Recovery of that Tablespace.
opsdba:/u02/ORACLE/opsdba>rman target /
Recovery Manager: Release 10.2.0.2.0 - Production on Sun Jan 28 23:49:33 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: OPSDBA (DBID=1493612009)
RMAN> recover tablespace drtbs;
Starting recover at 28-JAN-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=154 devtype=DISK
starting media recovery
Sun Jan 28 23:22:36 2007
alter database recover logfile
'/u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf'
Sun Jan 28 23:22:36 2007
Media Recovery Log /u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf
Sun Jan 28 23:22:36 2007
Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
Mem# 0 errs 0: /u02/ORACLE/opsdba/redo03.log
Sun Jan 28 23:22:36 2007
Recovery of Online Redo Log: Thread 1 Group 2 Seq 4 Reading mem 0
Mem# 0 errs 0: /u02/ORACLE/opsdba/redo02.log
Sun Jan 28 23:22:36 2007
Recovery of Online Redo Log: Thread 1 Group 1 Seq 5 Reading mem 0
Mem# 0 errs 0: /u02/ORACLE/opsdba/redo01.log
Sun Jan 28 23:22:36 2007
Media Recovery Complete (opsdba)
Completed: alter database recover logfile
'/u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf'
Sun Jan 28 23:22:52 2007
media recovery complete, elapsed time: 00:00:00
Finished recover at 28-JAN-07
RMAN> exit
Recovery Manager complete.
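The interactive sequence of Steps 8 through 11 can also be scripted as a single RMAN run block, so the offline/restore/recover/online sequence executes in one pass (a sketch of the same operations performed above):

```sql
RMAN> run {
  sql 'alter tablespace drtbs offline immediate';
  restore tablespace drtbs;
  recover tablespace drtbs;
  sql 'alter tablespace drtbs online';
}
```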
Step 11: Bring the tablespace online and confirm.
SQL> alter tablespace drtbs online;
Tablespace altered.
SQL> select TABLESPACE_NAME,STATUS from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
USERS ONLINE
TEMP1 ONLINE
DRTBS ONLINE
6 rows selected.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> select * from t1;
COL1
----------
1
2
3
4
SQL> select file_name from dba_data_files where tablespace_name='DRTBS';
FILE_NAME
----------------------------------------------------------------------
----------
/u02/ORACLE/opsdba/drtbs1.dbf
/u02/ORACLE/opsdba/drtbs2.dbf
SQL> exit
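After bringing the tablespace online, a quick sanity check that no datafile still requires recovery can be run (not part of the original transcript):

```sql
-- Should return no rows if recovery is complete
SQL> select * from v$recover_file;
-- Datafile headers for the restored files should show no error
SQL> select file#, status, error from v$datafile_header where file# in (11, 12);
```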
Step 12: As a standard practice, take a FULL DATABASE BACKUP immediately after the
recovery.
opsdba:/u02/ORACLE/opsdba>rman target /
Recovery Manager: Release 10.2.0.2.0 - Production on Sun Jan 28 23:25:01 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: OPSDBA (DBID=1493612009)
RMAN> backup database plus archivelog;
Starting backup at 28-JAN-07
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=144 devtype=DISK
skipping archive log file /u02/ORACLE/opsdba/arch/arch_1_1_613052894.dbf; already backed up 1 time(s)
skipping archive log file /u02/ORACLE/opsdba/arch/arch_1_2_613052894.dbf; already backed up 1 time(s)
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=3 recid=381 stamp=613091353
input archive log thread=1 sequence=4 recid=382 stamp=613091355
input archive log thread=1 sequence=5 recid=383 stamp=613092208
input archive log thread=1 sequence=6 recid=384 stamp=613092210
input archive log thread=1 sequence=7 recid=385 stamp=613092318
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.154.1.1.613092318 tag=TAG20070128T232518 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07
Starting backup at 28-JAN-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u02/ORACLE/opsdba/system01.dbf
input datafile fno=00003 name=/u02/ORACLE/opsdba/sysaux01.dbf
input datafile fno=00002 name=/u02/ORACLE/opsdba/undotbs01.dbf
input datafile fno=00011 name=/u02/ORACLE/opsdba/drtbs1.dbf
input datafile fno=00012 name=/u02/ORACLE/opsdba/drtbs2.dbf
input datafile fno=00004 name=/u02/ORACLE/opsdba/users01.dbf
input datafile fno=00005 name=/u02/ORACLE/opsdba/users02.dbf
input datafile fno=00006 name=/u02/ORACLE/opsdba/users03.dbf
input datafile fno=00007 name=/u02/ORACLE/opsdba/users05.dbf
input datafile fno=00010 name=/u02/ORACLE/opsdba/users04.dbf
input datafile fno=00008 name=/u02/ORACLE/opsdba/users06.dbf
input datafile fno=00009 name=/u02/ORACLE/opsdba/users07.dbf
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.155.1.1.613092320 tag=TAG20070128T232520 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 28-JAN-07
Starting backup at 28-JAN-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=8 recid=386 stamp=613092345
channel ORA_DISK_1: starting piece 1 at 28-JAN-07
channel ORA_DISK_1: finished piece 1 at 28-JAN-07
piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070128.156.1.1.613092346 tag=TAG20070128T232545 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 28-JAN-07
Starting Control File and SPFILE Autobackup at 28-JAN-07
piece handle=/opt/oracle/product10gpr2/dbs/c-1493612009-20070128-07 comment=NONE
Finished Control File and SPFILE Autobackup at 28-JAN-07
RMAN> list backup of datafile 11;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
129 Full 669.09M DISK 00:00:15 28-JAN-07
BP Key: 129 Status: AVAILABLE Compressed: NO Tag: TAG20070128T223735
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.148.1.1.613089455
List of Datafiles in backup set 129
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 2747296 28-JAN-07 /u02/ORACLE/opsdba/drtbs1.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136 Full 669.73M DISK 00:00:21 28-JAN-07
BP Key: 136 Status: AVAILABLE Compressed: NO Tag: TAG20070128T232520
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.155.1.1.613092320
List of Datafiles in backup set 136
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
11 Full 2748771 28-JAN-07 /u02/ORACLE/opsdba/drtbs1.dbf
RMAN> list backup of datafile 12;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136 Full 669.73M DISK 00:00:21 28-JAN-07
BP Key: 136 Status: AVAILABLE Compressed: NO Tag: TAG20070128T232520
Piece Name: /opt/oracle/backup/opsdba/OPSDBA.20070128.155.1.1.613092320
List of Datafiles in backup set 136
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
12 Full 2748771 28-JAN-07 /u02/ORACLE/opsdba/drtbs2.dbf
RMAN>
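As optional housekeeping after the fresh full backup, older backup pieces can be validated against disk and pruned according to the configured retention policy (a hedged sketch, not part of the original test):

```sql
RMAN> crosscheck backup;
RMAN> report obsolete;
RMAN> delete obsolete;
```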
______________________________________END _________________________________