This document outlines the steps to install Oracle Grid Infrastructure and configure an Oracle Real Application Clusters (RAC) database with iSCSI high availability on two nodes. It describes prerequisite tasks such as setting up repositories, installing Oracle Grid and database packages, and configuring users, directories, and environment variables. Specific steps covered include bonding network interfaces, configuring the hosts file, setting up swap space, and installing the Oracle Grid software.
Oracle cluster installation with grid and iscsi
1. ORACLE CLUSTER INSTALLATION WITH GRID & ISCSI HIGH AVAILABILITY-12C November 24, 2016
GRID RAC Page 1
ORACLE CLUSTER INSTALLATION WITH GRID & iSCSI HIGH AVAILABILITY – 12C RAC
SETTING UP PRE-REQUIREMENTS
Date:
date -s "9 AUG 2013 11:32:08"
SETTING UP EPEL REPOSITORY ON ALL THE SERVERS
yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
INSTALLING ORACLE ASMLIB PACKAGE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
cd /etc/yum.repos.d ; wget https://public-yum.oracle.com/public-yum-ol6.repo --no-check-certificate
wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
yum install kernel-uek-devel* kernel-devel oracleasm oracleasm-support elfutils-libelf-devel kmod-oracleasm oracleasmlib tcpdump htop -y
yum install oracleasmlib-2.0.12-1.el6.x86_64.rpm
INSTALLING ORACLE GRID AND DATABASE PRE-REQUIREMENTS ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
yum install binutils-2.* elfutils-libelf-0.* glibc-2.* glibc-common-2.* ksh-2* libaio-0.* libgcc-4.* libstdc++-4.* make-3.* elfutils-libelf-devel-* gcc-4.* gcc-c++-4.* glibc-devel-2.* glibc-headers-2.* libstdc++-devel-4.* unixODBC-2.* compat-libstdc++-33* libaio-devel-0.* unixODBC-devel-2.* sysstat-7.* -y
INSTALLING BIND PRE-REQUIREMENTS ON DNS SERVER - (192.168.0.110)
yum -y install bind bind-utils
INSTALLING NFS SERVER PRE-REQUIREMENTS (10.75.40.31 & 10.75.40.32)
yum -y install nfs-utils
TO OVERCOME ORA-00845: MEMORY_TARGET NOT SUPPORTED ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system
This error comes up when the Automatic Memory Management (AMM) feature of Oracle 12c is used but the shared memory filesystem (/dev/shm) is too small. Enlarge the shared memory filesystem to avoid the error.
First of all, login as root and have a look at the filesystem:
df -hT
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_oracleem-lv_root
93G 19G 69G 22% /
tmpfs 5.9G 112K 5.9G 1% /dev/shm
/dev/sda1 485M 99M 362M 22% /boot
We can see that tmpfs has a size of 6GB. We can change the size of that filesystem by issuing the following
command (where “12g” is the size I want for my MEMORY_TARGET):
mount -t tmpfs shmfs -o size=12g /dev/shm
The shared memory file system should be big enough to accommodate the MEMORY_TARGET and
MEMORY_MAX_TARGET values, or Oracle will throw the ORA-00845 error. Note that when changing
something with the mount command, the changes are not permanent.
To make the change persistent, edit your /etc/fstab file to include the option you specified above:
tmpfs /dev/shm tmpfs size=12g 0 0
SQL> startup nomount
ORACLE instance started.
Total System Global Area 1.1758E+10 bytes
Fixed Size 2239056 bytes
Variable Size 5939135920 bytes
Database Buffers 5804916736 bytes
Redo Buffers 12128256 bytes
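The size comparison described above can be scripted as a quick pre-check. This is a minimal sketch; the function name and the MB-based interface are illustrative assumptions, not part of Oracle's tooling:

```shell
# Hypothetical helper: compare a given /dev/shm size against an intended
# MEMORY_TARGET (both in MB) and flag an ORA-00845 risk.
# On a live node you would feed it real values, e.g.:
#   shm_mb=$(df -m /dev/shm | awk 'NR==2 {print $2}')
check_shm_for_memory_target() {
  local shm_mb=$1 memory_target_mb=$2
  if [ "$shm_mb" -ge "$memory_target_mb" ]; then
    echo "OK: /dev/shm (${shm_mb}M) can hold MEMORY_TARGET (${memory_target_mb}M)"
  else
    echo "TOO SMALL: enlarge /dev/shm to at least ${memory_target_mb}M (ORA-00845 risk)"
  fi
  return 0
}
```

For the 6 GB tmpfs shown in the df output above and a 12 GB MEMORY_TARGET, the helper would report the filesystem as too small, matching the resize performed next.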
ADDING SWAP SPACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
dd if=/dev/zero of=/root/newswapfile bs=1M count=8198
chmod 600 /root/newswapfile
mkswap /root/newswapfile
swapon /root/newswapfile
To make the change persistent, edit your /etc/fstab file to include the option you specified above:
vim /etc/fstab
/root/newswapfile swap swap defaults 0 0
Verify:
swapon -s
free -k
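The 8 GB figure used above follows the usual Oracle sizing guidance for swap relative to RAM. A sketch of that rule (the function name and MB units are assumptions; verify the exact thresholds against the installation guide for your release):

```shell
# Oracle's install guides suggest swap sized from RAM roughly as:
#   RAM 1-2 GB  -> swap = 1.5 x RAM
#   RAM 2-16 GB -> swap = RAM
#   RAM > 16 GB -> swap = 16 GB
# Input and output are in MB.
recommended_swap_mb() {
  local ram_mb=$1
  if [ "$ram_mb" -le 2048 ]; then
    echo $(( ram_mb * 3 / 2 ))
  elif [ "$ram_mb" -le 16384 ]; then
    echo "$ram_mb"
  else
    echo 16384
  fi
}
```

For an 8 GB node this yields 8192 MB, in line with the `count=8198` swap file created above.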
EDIT “/ETC/SYSCONFIG/NETWORK” AS ROOT USER ON BOTH NODES/RACS - (192.168.0.139 &
192.168.0.140)
NETWORKING=yes
HOSTNAME=kkcodb01
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb01
NETWORKING=yes
HOSTNAME=kkcodb02
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb02
UPDATE /ETC/HOSTS FILE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
Make sure that hosts file has right entries (remove or comment out lines with ipv6), make sure there is correct IP and
hostname, edit /etc/hosts as root:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#public
192.168.0.139 kkcodb01 kkcodb01.example.com
192.168.0.140 kkcodb02 kkcodb02.example.com
#vip
192.168.0.143 kkcodb01-vip kkcodb01-vip.example.com
192.168.0.144 kkcodb02-vip kkcodb02-vip.example.com
#scan vip
#192.168.0.145 kkcodb-scan kkcodb-scan.example.com
#192.168.0.146 kkcodb-scan kkcodb-scan.example.com
#192.168.0.147 kkcodb-scan kkcodb-scan.example.com
#192.168.0.148 kkcodb-scan kkcodb-scan.example.com
#priv
10.75.40.143 kkcodb01-priv1 kkcodb01-priv1.example.com
10.75.40.144 kkcodb02-priv1 kkcodb02-priv1.example.com
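Before moving on, it is worth confirming that every cluster name above is actually present in the hosts file. A hypothetical helper for that check (the function name is an assumption; on the nodes you would point it at /etc/hosts):

```shell
# Check that each required hostname appears in a hosts-format file.
# Usage: check_hosts_entries /etc/hosts kkcodb01 kkcodb02 kkcodb01-vip ...
check_hosts_entries() {
  local hosts_file=$1; shift
  local missing=0 name
  for name in "$@"; do
    if ! grep -qw "$name" "$hosts_file"; then
      echo "MISSING: $name"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "ALL PRESENT"
  fi
  return 0
}
```

Running it against the listing above with the public, VIP, and private names would report any entry that was accidentally dropped or commented out.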
BOND / TEAM MULTIPLE NETWORK INTERFACES (NIC) INTO A SINGLE INTERFACE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical
“bonded” interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes
provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Modify the eth0, eth1, eth2, eth3 … ethX config files to enslave them to bond0 & bond1
Create a bond0 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.0.0
NETMASK=255.255.255.0
IPADDR=192.168.0.139
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
Create a bond1 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.75.40.0
NETMASK=255.255.255.0
IPADDR=10.75.40.143
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/modprobe.conf
alias bond0 bonding
alias bond1 bonding
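Once the bonds are up, the kernel exposes their state under /proc/net/bonding/. A small sketch that pulls the bonding mode and the currently active slave out of that status text; the function name is an assumption, but the field labels are the ones the bonding driver prints:

```shell
# Summarize a /proc/net/bonding/bondX status file read from stdin.
# With BONDING_OPTS="mode=1 miimon=100" as configured above, the mode
# line reads "fault-tolerance (active-backup)".
bond_summary() {
  awk -F': ' '
    /^Bonding Mode/           { print "mode=" $2 }
    /^Currently Active Slave/ { print "active=" $2 }
  '
}
# e.g. on a live node:  bond_summary < /proc/net/bonding/bond0
```

Checking both bond0 and bond1 this way confirms that failover (mode 1) is active and which NIC currently carries traffic.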
CREATE USER AND GROUPS FOR ORACLE DATABASE AND GRID ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /app/oracle
mkdir -p /app/12.1.0/grid
chown grid:dba /app
chown grid:dba /app/oracle
chown grid:dba /app/12.1.0
chown grid:dba /app/12.1.0/grid
chmod -R 775 /app
mkdir -p /u01 ; mkdir -p /u02 ; mkdir -p /u03
(Giving read/write/execute permission to the grid user in the dba group)
chown grid:dba /u01
chown grid:dba /u02
chown grid:dba /u03
chmod +x /u01
chmod +x /u02
chmod +x /u03
or
(Giving read/write/execute permission to grid/oracle - all users in the dba group)
chgrp dba /u01
chgrp dba /u02
chgrp dba /u03
chmod g+swr /u01
chmod g+swr /u02
chmod g+swr /u03
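The end state either way is that /u01–/u03 belong to group dba and are group-writable. A hypothetical verification helper (the function name is an assumption; it relies on GNU coreutils `stat`):

```shell
# Confirm a directory has the expected group and is group-writable.
# Usage: check_dir_group_writable /u01 dba
check_dir_group_writable() {
  local dir=$1 group=$2
  local g perms
  g=$(stat -c '%G' "$dir")      # owning group name
  perms=$(stat -c '%A' "$dir")  # e.g. drwxrwsr-x
  case "$perms" in
    ?????w*) : ;;               # 6th char is the group write bit
    *) g="__no_group_write__" ;;
  esac
  if [ "$g" = "$group" ]; then
    echo "OK: $dir"
  else
    echo "BAD: $dir (group=$g perms=$perms)"
  fi
  return 0
}
```

Run against /u01, /u02, and /u03 after the chgrp/chmod steps to catch a missed directory before the Grid installer does.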
SETTING UP ENVIRONMENT VARIABLES FOR OS ACCOUNTS: GRID AND ORACLE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
@ the kkcodb01 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022
@ the kkcodb02 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022
@ kkcodb01, as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022
@ kkcodb02, as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022
cat >> /etc/profile <<'EOF'
if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
if [ "$SHELL" = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
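One subtlety worth noting: the heredoc delimiter must be quoted ('EOF'), otherwise the shell appending to /etc/profile expands $USER and $SHELL at write time (as root) rather than leaving them to be evaluated at each login. A quick self-contained demonstration using a scratch file:

```shell
# Demonstrate that a quoted heredoc delimiter preserves $USER literally.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
if [ "$USER" = "oracle" ]; then ulimit -u 16384 -n 65536; fi
EOF
# Count lines still containing the literal string $USER: prints 1.
grep -c '\$USER' "$tmp"
rm -f "$tmp"
```

With an unquoted delimiter the same grep would print 0, because $USER would already have been replaced by the writing user's name.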
cat >> /etc/csh.login <<'EOF'
if ( "$USER" == "oracle" || "$USER" == "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
EOF
Execute shutdown -r now on both nodes to reboot them.
DOWNLOADING ORACLE DATABASE AND GRID INFRASTRUCTURE SOFTWARE
You would have to download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64 – here
Download – linuxamd64_12102_grid_1of2.zip
Download – linuxamd64_12102_grid_2of2.zip
DOWNLOADING ORACLE DATABASE SOFTWARE
You would have to download Oracle Database 12c Release 1 (12.1.0.2.0) for Linux x86-64 – here
Download – linuxamd64_12102_database_1of2.zip
Download – linuxamd64_12102_database_2of2.zip
Copy the zip files to the /tmp directory on the kkcodb01 server using WinSCP.
As the root user:
cd /tmp
for i in /tmp/linuxamd64_12102_grid_*.zip; do unzip "$i" -d /home/grid/stage; done
for i in /tmp/linuxamd64_12102_database_*.zip; do unzip "$i" -d /home/oracle/stage; done
INSTALL BIND TO CONFIGURE A DNS SERVER ON 192.168.0.138, WHICH RESOLVES DOMAIN NAMES
AND IP ADDRESSES (FORWARD AND REVERSE LOOKUPS).
yum -y install bind bind-utils
Configure BIND.
vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
acl "trusted" {
192.168.0.0/24;
10.75.40.0/24;
};
options {
listen-on port 53 { 127.0.0.1; 192.168.0.0/24; 10.75.40.0/24;};
#listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-transfer { any; };
allow-query { localhost; trusted; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
include "/etc/named/named.conf.local";
vim /etc/named/named.conf.local
zone "example.com" {
type master;
file "/etc/named/zones/db.example.com"; # forward zone file
};
zone "0.168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.192.168.0"; # reverse zone for 192.168.0.0/24
};
zone "40.75.10.in-addr.arpa" {
type master;
file "/etc/named/zones/db.10.75.40"; # reverse zone for 10.75.40.0/24
};
vim /etc/named/zones/db.example.com
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; name servers - A records
ns1.example.com. IN A 192.168.0.138
; A records - the four SCAN addresses form a round-robin set
kkcodb-scan IN A 192.168.0.145
kkcodb-scan IN A 192.168.0.146
kkcodb-scan IN A 192.168.0.147
kkcodb-scan IN A 192.168.0.148
;
kkcodb01-priv1 IN A 10.75.40.143
kkcodb02-priv1 IN A 10.75.40.144
;
kkcodb01 IN A 192.168.0.139
kkcodb02 IN A 192.168.0.140
;
nfs IN A 192.168.0.30
nfs-active IN A 10.75.40.31
nfs-pasive IN A 10.75.40.32
vim /etc/named/zones/db.192.168.0
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR Records
138 IN PTR ns1.example.com. ; 192.168.0.138
;
145 IN PTR kkcodb-scan.example.com. ; 192.168.0.145
146 IN PTR kkcodb-scan.example.com. ; 192.168.0.146
147 IN PTR kkcodb-scan.example.com. ; 192.168.0.147
148 IN PTR kkcodb-scan.example.com. ; 192.168.0.148
;
139 IN PTR kkcodb01.example.com. ; 192.168.0.139
140 IN PTR kkcodb02.example.com. ; 192.168.0.140
;
30 IN PTR nfs.example.com. ; 192.168.0.30
vim /etc/named/zones/db.10.75.40
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR Records
143 IN PTR kkcodb01-priv1.example.com. ; 10.75.40.143
144 IN PTR kkcodb02-priv1.example.com. ; 10.75.40.144
;
31 IN PTR nfs-active.example.com. ; 10.75.40.31
32 IN PTR nfs-pasive.example.com. ; 10.75.40.32
chkconfig named on
service named restart
named-checkzone 0.168.192.in-addr.arpa /etc/named/zones/db.192.168.0
named-checkzone 40.75.10.in-addr.arpa /etc/named/zones/db.10.75.40
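The four kkcodb-scan A records form a DNS round-robin set: each lookup of kkcodb-scan.example.com returns all four addresses in rotating order, which is how the 12c SCAN listener spreads client connections across the cluster. A small sketch extracting that record set (the zone data is embedded here for illustration; on a live system you would query the server with nslookup or dig):

```shell
# Extract the SCAN round-robin address set from zone-file-style records.
awk '$1 == "kkcodb-scan" && $3 == "A" { print $4 }' <<'EOF'
kkcodb-scan IN A 192.168.0.145
kkcodb-scan IN A 192.168.0.146
kkcodb-scan IN A 192.168.0.147
kkcodb-scan IN A 192.168.0.148
EOF
```

The output is the four SCAN addresses, one per line.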
@ CONFIGURE THE DNS CLIENT SETTINGS ON ALL SERVERS (INCLUDING THE BIND SERVER)
AS FOLLOWS:
vim /etc/sysconfig/networking/profiles/default/resolv.conf
nameserver 192.168.0.138
search example.com
vim /etc/resolv.conf
nameserver 192.168.0.138
search example.com
service network restart
chkconfig NetworkManager off
service network restart
cat /etc/resolv.conf
DEVICE eth0, eth1…ethX DOES NOT SEEM TO BE PRESENT:
I was able to fix this problem by deleting the /etc/udev/rules.d/70-persistent-net.rules file
and restarting the virtual machine, which generated a new file and set everything up correctly.
Remove the .ssh directory from the individual users and restart the servers.
LOGICAL VOLUME MANAGEMENT – LVM ON NFS & NFS BACKUP KEEPER SERVERS
LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices.
Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from the HP-UX's volume
manager.
The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE,
Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are
LVM-aware and can install a bootable system with a root filesystem on a logical volume.
LVM IS COMMONLY USED FOR THE FOLLOWING PURPOSES:
1. Managing large hard disk farms by allowing disks to be added and replaced without downtime or service
disruption, in combination with hot swapping.
2. On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition
might need to be in the future, LVM allows file systems to be easily resized later as needed.
3. Performing consistent backups by taking snapshots of the logical volumes.
4. Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID
0, but more similar to JBOD), allowing for dynamic volume resizing.
5. The Ganeti solution stack relies on the Linux Logical Volume Manager.
6. LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an
abstraction of continuity and ease-of-use for managing hard drive replacement, re-partitioning, and backup.
THE LVM CAN:
1. Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
2. Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
3. Create read-only snapshots of logical volumes (LVM1).
4. Create read-write snapshots of logical volumes (LVM2).
5. Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.
6. Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
7. Configure a RAID 1 backend device (a PV) as write-mostly, resulting in reads being avoided to such devices.
8. Allocate thin-provisioned logical volumes from a pool.
9. Move online logical volumes between PVs.
10. Split or merge volume groups in situ (as long as no logical volumes span the split); this can be useful
when migrating whole logical volumes to or from offline storage.
11. Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as
flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).
CREATE A PHYSICAL VOLUME
Input Command
pvcreate -ff /dev/sdb
Output
Physical volume "/dev/sdb" successfully created
DISPLAY THE STATUS OF PHYSICAL VOLUMES
Input Command
pvdisplay /dev/sdb
Output
"/dev/sdb" is a new physical volume of "150.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 150.00 GiB
CREATE A VOLUME GROUP
Input Command
vgcreate volg1 /dev/sdb
Output
Volume group "volg1" successfully created
DISPLAY VOLUME GROUPS
Input Command
vgdisplay
Output
--- Volume group ---
VG Name volg1
System ID
Format lvm2
VG Access read/write
VG Status resizable
VG Size 150.00 GiB
CREATE A LOGICAL VOLUME
Input Command
lvcreate -L 149G -n lv_data volg1
NOTE: creates the logical volume 'lv_data' of 149 GB in the volume group 'volg1' (slightly under the 150 GB PV, since LVM metadata consumes part of the space)
Output
Logical volume "lv_data" created
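The 149G figure is deliberate: requesting the full 150G would fail because LVM reserves part of the physical volume for metadata. A back-of-envelope sketch of the extent arithmetic (the default 4 MiB physical extent size is an assumption here; check it with vgdisplay):

```shell
# A 150 GiB PV carved into default 4 MiB physical extents: the usable
# capacity is slightly below 150 GiB once LVM metadata is subtracted,
# which is why the lvcreate above asks for 149G rather than 150G.
pv_mib=$((150 * 1024))      # PV size in MiB
pe_mib=4                    # assumed default physical extent size
echo $((pv_mib / pe_mib))   # total extents before metadata overhead: 38400
```

To avoid the arithmetic entirely, lvcreate also accepts `-l 100%FREE` to consume whatever actually remains in the volume group.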
DISPLAY STATUS OF LOGICAL VOLUMES
Input Command
lvdisplay
Output
--- Logical volume ---
LV Path /dev/volg1/lv_data
LV Name lv_data
VG Name volg1
LV Write Access read/write
LV Status available
FORMAT THE LOGICAL VOLUME BEFORE MOUNTING IT.
Input Command
mkfs.ext4 /dev/volg1/lv_data
MOUNTING THE LOGICAL VOLUME ON A DEDICATED iSCSI DIRECTORY
Input Command
mkdir -p /u01/VM/iSCSI_shares
mount /dev/volg1/lv_data /u01/VM/iSCSI_shares
vim /etc/fstab
/dev/volg1/lv_data /u01/VM/iSCSI_shares ext4 defaults 0 0
CONFIGURING LSYNCD BACKUP SERVER AS A BACKUP KEEPER
Lsyncd is a tool I discovered a few weeks ago: a synchronization daemon based primarily on rsync. It runs on the
"master" server and mirrors any file or directory change to the "slave" servers within seconds; you can have as
many slave servers as you want. Lsyncd constantly watches a local directory, monitoring file system changes
through inotify / fsevents.
By default, lsyncd uses rsync to send the data to the slave machines, although other transports are available.
It does not require you to build new filesystems or block devices, and it does not harm your server's I/O performance.
yum -y install lua lua-devel pkgconfig gcc asciidoc lsyncd rsync
@ 10.75.40.30/192.168.0.30
vim /etc/lsyncd.conf
settings = {
    logfile = "/var/log/lsyncd.log",
    statusFile = "/tmp/lsyncd.stat",
    statusInterval = 1,
}
sync {
    default.rsync,
    source = "/u01/VM/nfs_shares",
    target = "192.168.0.31:/u01/VM/nfs_shares",
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        owner = true,
        group = true,
        perms = true,
        rsh = "/usr/bin/ssh -l root -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no",
    },
}
mkdir -p /var/log/lsyncd
service lsyncd start
chkconfig lsyncd on
CONFIGURING iSCSI SERVER (10.75.40.30 / 192.168.0.30)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /u01/VM/iSCSI_shares/shared_1
mkdir -p /u01/VM/iSCSI_shares/shared_2
mkdir -p /u01/VM/iSCSI_shares/shared_3
chown grid:dba /u01/VM/iSCSI_shares/shared_1
chown grid:dba /u01/VM/iSCSI_shares/shared_2
chown grid:dba /u01/VM/iSCSI_shares/shared_3
chmod +x /u01/VM/iSCSI_shares/shared_1
chmod +x /u01/VM/iSCSI_shares/shared_2
chmod +x /u01/VM/iSCSI_shares/shared_3
CONFIGURE ISCSI TARGET
Storage exposed over a network is called an iSCSI target; a client that connects to an iSCSI target is called an iSCSI initiator.
Install administration tools.
yum -y install scsi-target-utils
Configure iSCSI Target
For example, create disk images under the [/u01/VM/iSCSI_shares] directory and set them up as shared disks.
dd if=/dev/zero of=/u01/VM/iSCSI_shares/shared_1/disk01.img count=0 bs=1 seek=50G
dd if=/dev/zero of=/u01/VM/iSCSI_shares/shared_2/disk02.img count=0 bs=1 seek=50G
dd if=/dev/zero of=/u01/VM/iSCSI_shares/shared_3/disk03.img count=0 bs=1 seek=50G
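These dd invocations finish instantly because count=0 with seek=50G creates sparse files: the apparent size is set to 50 GiB, but no data blocks are allocated until the initiators actually write. A small-scale demonstration of the same idiom (1 MiB instead of 50 GiB):

```shell
# Sparse-file demo: seeking past end-of-file with count=0 sets the file
# size without allocating blocks, exactly like the 50 GiB images above.
f=$(mktemp)
dd if=/dev/zero of="$f" count=0 bs=1 seek=1M 2>/dev/null
echo "apparent=$(stat -c %s "$f")"   # apparent size in bytes: 1048576
echo "blocks=$(stat -c %b "$f")"     # allocated 512-byte blocks (0 on sparse-capable filesystems)
rm -f "$f"
```

The flip side is that the backing filesystem can overcommit: make sure /u01/VM/iSCSI_shares really has 150 GB free before the initiators start filling the images.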
vim /etc/tgt/targets.conf
<target iqn.2016-12.192.168.0.30:target00>
# provided device as an iSCSI target
backing-store /u01/VM/iSCSI_shares/shared_1/disk01.img
backing-store /u01/VM/iSCSI_shares/shared_2/disk02.img
backing-store /u01/VM/iSCSI_shares/shared_3/disk03.img
# iSCSI Initiator's IP address you allow to connect
initiator-address 192.168.0.0/24
# authentication info ( set anyone you like for "username", "password" )
incominguser chanaka z80cpu
</target>
Start tgtd and verify status.
/etc/rc.d/init.d/tgtd start
chkconfig tgtd on
tgtadm --mode target --op show
CONFIGURING iSCSI MOUNT POINTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
Configure iSCSI Initiator.
yum -y install iscsi-initiator-utils
vim /etc/iscsi/iscsid.conf
# line 56: uncomment
node.session.auth.authmethod = CHAP
# line 60,61: uncomment and set username and password which set on iSCSI Target
node.session.auth.username = chanaka
node.session.auth.password = z80cpu
Discover target
iscsiadm -m discovery -t sendtargets -p 192.168.0.30
Confirm status after discovery
iscsiadm -m node -o show
Log in to the target
iscsiadm -m node --login
Show established session
iscsiadm -m session -o show
Show partitions
cat /proc/partitions
fdisk -l | grep Disk
The new devices provided by the target are added as [sdb, sdc, sdd]
yum -y install parted
Create a label
parted --script /dev/sdb "mklabel msdos"
parted --script /dev/sdc "mklabel msdos"
parted --script /dev/sdd "mklabel msdos"
Create a partition
parted --script /dev/sdb "mkpart primary 0% 100%"
parted --script /dev/sdc "mkpart primary 0% 100%"
parted --script /dev/sdd "mkpart primary 0% 100%"
Format with EXT4
mkfs.ext4 /dev/sdb1
mkfs.ext4 /dev/sdc1
mkfs.ext4 /dev/sdd1
mount /dev/sdb1 /u01
mount /dev/sdc1 /u02
mount /dev/sdd1 /u03
df -hT
vim /etc/rc.local
iscsiadm -m discovery -t sendtargets -p 192.168.0.30
iscsiadm -m node --login
mount /dev/sdb1 /u01
mount /dev/sdc1 /u02
mount /dev/sdd1 /u03
init 6
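Mounting from rc.local works, but it can race with network and iscsid startup on some systems. A more robust alternative (a sketch; it assumes the sdb/sdc/sdd names persist across reboots, whereas in production a filesystem UUID or /dev/disk/by-path entry is safer) is to let fstab handle the mounts with the _netdev option, which defers them until networking is up:

```
# /etc/fstab entries (sketch) - _netdev defers these mounts until the
# network and the iSCSI initiator are available.
/dev/sdb1  /u01  ext4  _netdev  0 0
/dev/sdc1  /u02  ext4  _netdev  0 0
/dev/sdd1  /u03  ext4  _netdev  0 0
```

With node.startup set to automatic in /etc/iscsi/iscsid.conf, the discovery and login lines in rc.local become unnecessary as well.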
INSTALLING ORACLE GRID ON BOTH NODES FROM 192.168.0.139
INSTALLING ORACLE DATABASE ON BOTH NODES FROM 192.168.0.139