T3 is an optimized protocol used to transport data between WebLogic Server and other Java programs, including clients and other WebLogic Server instances. WebLogic Server keeps track of every Java Virtual Machine (JVM) with which it connects, and creates a single T3 connection to carry all traffic for that JVM. For example, if a Java client accesses an enterprise bean and a JDBC connection pool on WebLogic Server, a single network connection is established between the WebLogic Server JVM and the client JVM.
Oracle Support Note 1465038.1, "Calculating Usable Space in Exadata Cell"
[grid@dodpdb04 ~]$ asmcmd -p          (the -p option shows the present working directory in the prompt)
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB   Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N      512    4096   4194304  36384768  15924908  3307706          6308601         0              N             DATA_DODP/
MOUNTED  NORMAL  N      512    4096   4194304  2087680   303648    257400           23124           0              Y             DBFS_DG/
MOUNTED  NORMAL  N      512    4096   4194304  9090816   6850084   826437           3011823         0              N             RECO_DODP/
ASMCMD [+] > du DATA_DODP/
Used_MB   Mirror_used_MB
10228996  20458476
So the redundancy of DATA_DODP is NORMAL, meaning every extent is mirrored. Raw free space is the total capacity minus the mirrored usage:
[grid@dodpdb04 ~]$ echo "36384768-20458476" | bc
15926292
which is very close to Free_MB (15924908). Dividing the free space by 2 (for normal redundancy):
[grid@dodpdb04 ~]$ echo "15926292/2" | bc
7963146
This is in the same range as Usable_file_MB, the space available for new files. The exact figure is (Free_MB - Req_mir_free_MB)/2 = (15924908 - 3307706)/2 = 6308601, because ASM holds back Req_mir_free_MB so that it can re-mirror extents after a disk failure.
The same du output can be obtained in non-interactive mode:
[grid@dodpdb04 ~]$ asmcmd du DATA_DODP/          (asmcmd command in non-interactive mode)
Used_MB   Mirror_used_MB
10228996  20458476
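The arithmetic behind Usable_file_MB can be double-checked with a short script (a sketch using the DATA_DODP numbers from the lsdg output above; all values in MB):

```python
# DATA_DODP figures taken from the lsdg / du output above (MB).
total_mb = 36384768
free_mb = 15924908
req_mir_free_mb = 3307706
mirror_used_mb = 20458476

# Raw free space: total capacity minus mirrored usage (close to Free_MB).
raw_free_mb = total_mb - mirror_used_mb

# Usable space for new files: first hold back Req_mir_free_MB for
# re-mirroring after a disk failure, then halve because every extent
# is written twice under NORMAL redundancy.
usable_file_mb = (free_mb - req_mir_free_mb) // 2

print(raw_free_mb)     # 15926292
print(usable_file_mb)  # 6308601
```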
ASMCMD is very slow. How can I speed it up?
The asmcmd utility can appear very slow because of queries against the v$asm_diskgroup view. To work around this, edit the $ORACLE_HOME/bin/asmcmdcore script and change all v$asm_diskgroup references to v$asm_diskgroup_stat. The two views provide exactly the same information, but the %_stat view operates from cache, while v$asm_diskgroup rescans all disk headers. Oracle uses the same approach in its Enterprise Manager product.
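The edit above is a plain search-and-replace, with one pitfall: a naive substitution would also rewrite references that are already v$asm_diskgroup_stat. A sketch of doing it safely (illustrative only; back up asmcmdcore before touching it):

```python
import re

# Rewrite v$asm_diskgroup -> v$asm_diskgroup_stat, but leave references
# that already end in _stat untouched (negative lookahead on "_stat").
def patch_views(text: str) -> str:
    return re.sub(r"v\$asm_diskgroup(?!_stat)", "v$asm_diskgroup_stat", text)

src = "select * from v$asm_diskgroup; select * from v$asm_diskgroup_stat;"
print(patch_views(src))
# select * from v$asm_diskgroup_stat; select * from v$asm_diskgroup_stat;
```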
What is the SYSASM role?
Starting with Oracle 11g, the SYSASM role is used to administer ASM instances. You can continue using the SYSDBA role to connect to ASM, but it will generate warning messages at startup/shutdown, CREATE DISKGROUP, ADD DISK, and so on.
Alert log entry:
WARNING: Deprecated privilege SYSDBA for command 'STARTUP'
How can we copy files from/to ASM?
You can use RMAN or the DBMS_FILE_TRANSFER.COPY_FILE procedure to copy files between ASM and a filesystem in either direction.
Using the same disk size across failure groups in NORMAL/HIGH redundancy prevents issues such as ORA-15041, because file extents need to be mirrored across the disks.
I created an Oracle database using DBCA, with separate homes for ASM and the database. I see that the listener is running from ASM_HOME. Is this correct?
This is fine. When using a separate home for ASM, you run the listener from ASM_HOME instead of ORACLE_HOME.
How does one create a database directly on ASM?
The trick is to create an SPFILE and restart the instance before issuing the
CREATE DATABASE statement:
Code:
STARTUP NOMOUNT PFILE=initorcl_0.ora
CREATE SPFILE FROM pfile='initorcl_0.ora';
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
Point all OMF files into ASM:
Code:
ALTER SYSTEM SET db_create_file_dest = '+DATA';
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 134G;
ALTER SYSTEM SET db_recovery_file_dest = '+RECOVER';
Issue the create database command:
Code:
CREATE DATABASE orcl
UNDO TABLESPACE undots
DEFAULT TEMPORARY TABLESPACE temp
character set "WE8ISO8859P1"
national character set "AL16UTF16";
Why is the ASM instance not recognized by DBCA?
I had the same error, and the way I fixed it was by changing the TNS_ADMIN value in .bash_profile.
Old value:
TNS_ADMIN=$ORACLE_HOME/network/admin
New value:
TNS_ADMIN=$ASM_HOME/network/admin
My ORACLE_HOME=/d01/app/oracle/product/10.2.0/db_1
My ASM_HOME=/d01/app/oracle/product/10.2.0/asm
Once this was done, I re-executed my .bash_profile, started dbca, and the error went away.
Also, after you create your database, you might have to set the TNS_ADMIN environment variable in srvctl to start the database. A wrong TNS_ADMIN setting might cause the following errors when starting the database with srvctl (but not with sqlplus):
$ srvctl start database -d ProdDb
PRKP-1001 : Error starting instance ProdDb on node node1
CRS-0215: Could not start resource 'ora.ProdDb.ProdDb1.inst'.
PRKP-1001 : Error starting instance ProdDb2 on node node2
CRS-0215: Could not start resource 'ora.ProdDb.ProdDb2.inst'.
For example:
srvctl setenv database -d ProdDb -t TNS_ADMIN='/d01/app/oracle/product/10.2.0/asm/network/admin'
Which shows you that all you need is an +ASM entry in the tnsnames.ora file
under /d01/app/oracle/product/10.2.0/db_1/network/admin.
Bigfile vs standard db files
The performance of database opens, checkpoints, and DBWR processes should
improve if data is stored in bigfile tablespaces instead of traditional
tablespaces. However, increasing the datafile size might increase the time to
restore a corrupted file or create a new datafile.
In Oracle 11g Release 2, instances register with SCAN listeners only as remote listeners. In your case it should be:
REMOTE_LISTENER=<scan-name>:<port>
That is the purpose of REMOTE_LISTENER.
Explanation of each parameter in Data Guard:
1. LOG_ARCHIVE_CONFIG =
{
[ SEND | NOSEND ]
[ RECEIVE | NORECEIVE ]
[ DG_CONFIG=(remote_db_unique_name1 [, ... remote_db_unique_name9]) | NODG_CONFIG ]
}
SEND enables sending of redo log files; RECEIVE enables receiving of redo log files; DG_CONFIG lists the DB_UNIQUE_NAME of every database that is part of the Data Guard configuration.
2. Do not use the default value, VALID_FOR=(ALL_LOGFILES, ALL_ROLES), for logical standby databases.
log_archive_dest_state_2=DEFER/ENABLE
3. FAL_SERVER and FAL_CLIENT (fetch archive log) are used for archive-gap resolution and a smooth switchover.
4. The standby configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database.
This is because logical standby databases may require more standby redo log files (or additional ARCn processes) depending on the workload: logical standby databases also write to online redo log files, which take precedence over standby redo log files, so the standby redo log files may not be archived as quickly as the online redo log files.
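The "one more group" rule generalizes to the commonly cited sizing formula (a sketch; the formula (online log groups per thread + 1) × number of threads is the usual recommendation, and logical standbys may need more, as noted above):

```python
# Recommended number of standby redo log groups:
# (maximum number of online log groups per thread + 1) * number of threads.
def standby_redo_groups(online_groups_per_thread: int, threads: int = 1) -> int:
    return (online_groups_per_thread + 1) * threads

# Single-instance primary with 3 online redo log groups -> 4 standby groups.
print(standby_redo_groups(3))      # 4
# Two-node RAC with 3 groups per thread -> 8 standby groups.
print(standby_redo_groups(3, 2))   # 8
```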
Standby redo logs can be checked in v$standby_log, or with:
select member from v$logfile where type='STANDBY';
RMAN> backup current controlfile for standby;
5. cp orapwprimary orapwstandby (rename the password file according to the SID on the standby).
Note that chown -R oracle:oinstall /u01/app is never equivalent to chown -R oracle:oinstall /u01.
scope=both is not valid when the database is in NOMOUNT mode.
Error: ORA-12528: TNS:listener: all appropriate instances are blocking new connections
Reason: the database is currently starting up or not available.
We should have all the archive log files from the backup onwards.
The DORECOVER clause in the DUPLICATE DATABASE command means: we have taken a backup on the primary with BACKUP DATABASE PLUS ARCHIVELOG, so Oracle will restore the datafiles from the backup set and recover using the archived logs that were backed up. (In any case, once MRP starts, it performs recovery.)
Backing Up Logs with BACKUP ... PLUS ARCHIVELOG
You can add archived redo logs to a backup of other files by using the BACKUP ... PLUS ARCHIVELOG clause, which causes RMAN to do the following:
1. Run the ALTER SYSTEM ARCHIVE LOG CURRENT command.
2. Run BACKUP ARCHIVELOG ALL. Note that if backup optimization is enabled, RMAN skips logs that it has already backed up to the specified device.
3. Back up the rest of the files specified in the BACKUP command (if it is a database backup, the complete database).
4. Run the ALTER SYSTEM ARCHIVE LOG CURRENT command again.
5. Back up any remaining archived logs generated during the backup.
Why do we need a standby control file backup?
A standby control file sets a flag by which the database identifies itself as a standby rather than a normal database, even though it is created from the current control file. The control file must be created after the backup of the datafiles.
restore standby controlfile from 'C:ORACLECONTROL08N3GEA6_1_1.BKP';
If you have ILOM, you can view the progress of the restart through its interface: ssh to the ILOM and run:
start /SP/console
ASMDeactivationOutcome
This attribute shows whether the grid disk can be deactivated without loss of data. A value of "Yes" indicates you can deactivate this grid disk without data loss.
CellCLI> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
Is crsctl status resource -t the same as crs_stat -t? Yes: in 11gR2, crsctl status resource -t replaces the deprecated crs_stat -t.
A hash join (small table, large table) can be optimized with a Bloom filter: a filter built from the small table's join keys is used to discard non-matching rows of the large table early, and on Exadata it is pushed to the storage cells during smart scan.
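To illustrate the idea, here is a minimal, hypothetical Bloom filter sketch (not Exadata code; the bit-array size and hash count are arbitrary):

```python
import hashlib

# Minimal Bloom filter: build it from the small (dimension) table's join
# keys, then use it to discard non-matching rows of the large (fact) table
# before the hash join probes them.
class BloomFilter:
    def __init__(self, size_bits: int = 1024, hashes: int = 3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = [False] * size_bits

    def _positions(self, key):
        # Derive several bit positions per key from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key) -> bool:
        # False means definitely absent; True means possibly present
        # (false positives are possible, false negatives are not).
        return all(self.bits[p] for p in self._positions(key))

# Small table's join keys go into the filter...
bf = BloomFilter()
for key in [10, 20, 30]:
    bf.add(key)

# ...and the large table is pre-filtered before the join.
fact_rows = [(10, "a"), (15, "b"), (20, "c"), (99, "d")]
survivors = [r for r in fact_rows if bf.might_contain(r[0])]
print(survivors)
```

Rows whose keys are in the filter always survive; most non-matching rows are dropped before the join, which is what makes the technique worthwhile on storage-offloaded scans.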
Exadata is a kind of "different Oracle"; for example, it does not require as much indexing on tables. An Exadata DBA needs experience in database, storage, and OS administration, plus a good understanding of Smart Scan and Hybrid Columnar Compression (ctrieb). If you can answer the question "Why, with Exadata, do I probably not need [some or all of my] indexes?" you are on the right road (Dan Morgan).
Learn all you can about direct path reads, as they are critical to enabling smart scan. Serial direct path reads are done often in 11gR2, probably because of Exadata's influence.
Learn about parallel query and the mechanisms available to throttle it (queuing is now available in 11gR2).
Knowing something about InfiniBand would probably be a good idea as well, since you'll have to figure out how to connect to external devices (tape drives, for example).
1. Look at the cssd.log files on both nodes; usually we will get more information on the second node if the first node is evicted. Also take a look at the crsd.log file.
2. The evicted node will have a core dump file generated and system reboot info.
3. Find out if there was a node reboot, and whether it was caused by CRS or something else; check the system reboot time.
4. If you see "Polling" keywords with decreasing percentage values in the cssd.log file, the eviction is probably due to the network. If you see "Diskpingout" or something else disk-related, the eviction is because of a disk timeout.
Storage access and the private interconnect use different connectivity.
diagcollection.pl collects diagnostic information about CRS in $CRS_HOME.
crsctl debug trace {css|crs|evm} enables tracing for the given process.
crs_start
crs_start is used to start resources, either one at a time or for the entire cluster. To start all resources across a cluster, use crs_start with the -all option:
$ crs_start -all
crsctl stop all