This document provides instructions for configuring Distributed Replicated Block Device (DRBD) to create a high availability cluster between two servers. It discusses mirroring a block device via the network to provide network-based RAID 1 functionality. The document outlines the steps to install and configure DRBD, including installing packages, configuring resources, initializing metadata storage, starting the DRBD service, and creating a filesystem on the mirrored block device. It also provides requirements for DRBD and a sample installation script.
2. DRBD Page 2
DRBD (Distributed Replicated Block Device) refers to block devices designed as a building block to form high availability (HA)
clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based
RAID 1.
In the illustration above, the two orange boxes represent two servers that form an HA cluster. The boxes contain the usual
components of a Linux™ kernel: file system, buffer cache, disk scheduler, disk drivers, TCP/IP stack and network interface
card (NIC) driver. The black arrows illustrate the flow of data between these components.
The orange arrows show the flow of data, as DRBD mirrors the data of a highly available service from the active node of the
HA cluster to the standby node of the HA cluster.
The upper part of this picture shows a cluster where the left node is currently active, i.e., the service's IP address that the
client machines are talking to is currently on the left node.
The service, including its IP address, can be migrated to the other node at any time, either due to a failure of the active
node or as an administrative action. The lower part of the illustration shows a degraded cluster. In HA speak, the migration
of a service is called failover, the reverse process is called failback, and a migration triggered by an administrator is called
switchover.
What DRBD Does
Mirroring of important data
DRBD works on top of block devices, i.e., hard disk partitions or LVM logical volumes. It mirrors each data block that is
written to disk to the peer node.
From fully synchronous
Mirroring can be done tightly coupled (synchronous). That means that the file system on the active node is notified that the
write has finished only when the block has made it to both disks of the cluster.
Synchronous mirroring (called protocol C in DRBD speak) is the right choice for HA clusters where you dare not lose a single
transaction in case of the complete crash of the active (primary in DRBD speak) node.
To asynchronous
The other option is asynchronous mirroring. That means that the entity that issued the write requests is informed about
completion as soon as the data is written to the local disk.
Asynchronous mirroring is necessary to build mirrors over long distances, i.e., the interconnecting network's round trip
time is higher than the write latency you can tolerate for your application. (Note: The amount of data the peer node may
fall behind is limited by bandwidth-delay product and the TCP send buffer.)
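In a DRBD 8.3-style resource file, the choice between these modes is a one-line setting. The fragment below is an illustrative sketch only (resource name and all other details omitted); protocol C is synchronous, protocol A is asynchronous, and protocol B is a semi-synchronous middle ground where the write is confirmed once the data has reached the peer:

```
resource r0 {
  protocol C;   # C = synchronous, B = semi-synchronous, A = asynchronous
  ...
}
```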
Data accessible only on the active node
A consequence of mirroring data on block device level is that you can access your data (using a file system) only on the
active node. This is not a shortcoming of DRBD but is caused by the nature of most file systems (ext3, XFS, JFS, ext4 ...).
These file systems are designed for one computer accessing one disk, so they cannot cope with two computers accessing
one (virtually) shared disk.
In spite of this limitation, there are still a few ways to access the data on the second node:
Use DRBD on logical volumes and use LVM's capabilities to take snapshots on the standby node, and access the
data via the snapshot.
DRBD's primary-primary (dual-primary) mode with a shared-disk file system (GFS, OCFS2). These file systems are very
sensitive to failures of the replication network.
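The LVM snapshot approach can be sketched as follows, assuming the standby node's DRBD backing device is a logical volume; the volume group, LV, and mount point names here are hypothetical examples:

```
# On the standby (secondary) node only -- names are examples:
lvcreate --snapshot --size 1G --name drbd_snap /dev/vg0/drbd_backing
mkdir -p /mnt/drbd_snap
mount -o ro /dev/vg0/drbd_snap /mnt/drbd_snap   # read-only access to the data
# ...use the data, then clean up:
umount /mnt/drbd_snap
lvremove -f /dev/vg0/drbd_snap
```

This works with internal metadata because DRBD stores its metadata at the end of the backing device, so the filesystem still begins at the start of the snapshot.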
What DRBD Does After an Outage
After a node outage
After an outage of a node DRBD automatically resynchronizes the temporarily unavailable node to the latest version of the
data, in the background, without interfering with the service running. Of course this also works if the role of the surviving
node was changed while the peer was down.
In case a complete power outage takes both nodes down, DRBD will detect which of the nodes was down longer, and will
do the resynchronization in the right direction.
After an outage of the replication network
Restoring service after a temporary failure of the replication network is a typical example of the automatic recovery
mechanism just described. DRBD will re-establish the connection and do the necessary resynchronization automatically.
After an outage of a storage subsystem
DRBD can mask the failure of a disk on the active node, i.e., the service can continue to run there without needing to fail
over. If the disk can be replaced without shutting down the machine, it can be reattached to DRBD, which then
resynchronizes the data as needed to the replacement disk.
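Using the resource name disk1 from the example configuration later in this document, the manual variant of this disk replacement could look like the following sketch (to be run on the affected node):

```
drbdadm detach disk1      # if DRBD has not already detached the failed disk
# ...physically replace the disk and recreate the backing device (e.g. /dev/sdb)...
drbdadm create-md disk1   # write fresh metadata on the replacement disk
drbdadm attach disk1      # DRBD resynchronizes the data from the peer
```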
After an outage of all network links
DRBD supports you with various automatic and manual recovery options in the event of split brain.
Split brain is a situation where, due to the temporary failure of all network links between cluster nodes, and possibly due to
intervention by cluster management software or human error, both nodes switched to the primary role while
disconnected. This is a potentially harmful state, as it implies that modifications to the data might have been made on
either node, without having been replicated to the peer. Thus, it is likely in this situation that two diverging sets of data
have been created that cannot be merged.
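Manual split-brain recovery in DRBD 8.3 essentially means picking a "victim" node whose diverging changes are thrown away. A hedged sketch, again using the resource name disk1 from the example configuration later in this document:

```
# On the node whose changes will be discarded (the split-brain "victim"):
drbdadm secondary disk1
drbdadm -- --discard-my-data connect disk1

# On the surviving node (only needed if it is not already waiting to connect):
drbdadm connect disk1
```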
Distributed Replicated Block Device is effectively a network-based RAID 1. You would configure DRBD on your system if you:
need to protect the data on a certain disk and therefore mirror it to another machine via the network, or
are configuring a High Availability cluster or service.
REQUIREMENTS:
additional disk for synchronization on BOTH MACHINES (preferably same size)
network connectivity between machines
working DNS resolution (can fix with /etc/hosts file)
NTP synchronized times on both nodes
NTP-synchronized time on both nodes (configure on both nodes):
yum -y install ntp
vim /etc/ntp.conf
# line 19: add the network range you allow to receive requests
restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap
# change servers for synchronization
#server 0.rhel.pool.ntp.org
#server 1.rhel.pool.ntp.org
#server 2.rhel.pool.ntp.org
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
/etc/rc.d/init.d/ntpd start
chkconfig ntpd on
ntpq -p
1. BOTH MACHINES: Install the ELRepo repository on your system.
Set the correct date if needed:
date -s "9 AUG 2013 11:32:08"
Import the public key:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
To install ELRepo for RHEL-6, SL-6 or CentOS-6:
rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
2. BOTH MACHINES: Install the Distributed Replicated Block Device utils and kmod packages from ELRepo.
Choose the version you prefer – drbd83 or drbd84 (I've had problems with drbd84 on kernel 2.6.32-358.6.1.el6.i686).
yum install -y kmod-drbd83 drbd83-utils
3. BOTH MACHINES: Load the drbd kernel module manually or just reboot both machines.
/sbin/modprobe drbd
4. BOTH MACHINES: Create the Distributed Replicated Block Device resource file (/etc/drbd.d/disk1.res) and transfer it to
the other machine (these files need to be exactly the same on both machines!).
vim /etc/drbd.d/disk1.res
resource disk1
{
startup {
wfc-timeout 30;
outdated-wfc-timeout 20;
degr-wfc-timeout 30;
}
net {
cram-hmac-alg sha1;
shared-secret sync_disk;
}
syncer {
rate 100M;
verify-alg sha1;
}
on node1.chanaka.net {
device minor 1;
disk /dev/sdb;
address 192.168.1.100:7789;
meta-disk internal;
}
on node2.chanaka.net {
device minor 1;
disk /dev/sdb;
address 192.168.1.101:7789;
meta-disk internal;
}
}
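One way to satisfy the "exactly the same on both machines" requirement is to copy the file over and compare checksums; the node name is taken from the example configuration above:

```
scp /etc/drbd.d/disk1.res root@node2.chanaka.net:/etc/drbd.d/
md5sum /etc/drbd.d/disk1.res
ssh root@node2.chanaka.net md5sum /etc/drbd.d/disk1.res
# The two checksums must match.
```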
5. BOTH MACHINES: Make sure that DNS resolution is working as expected!
To quickly fix DNS resolution, add the IP addresses and FQDNs to /etc/hosts on both machines as follows:
vim /etc/hosts
192.168.1.100 node1.chanaka.net
192.168.1.101 node2.chanaka.net
6. BOTH MACHINES: Make sure that both machines are using NTP for time synchronization!
To quickly fix this, add an entry to your /etc/crontab file as follows, choosing your own NTP sync server:
vim /etc/crontab
1 * * * * root ntpdate your.ntp.server
(If you use crontab -e instead, omit the "root" user field – it is only valid in /etc/crontab.)
7. BOTH MACHINES: Initialize the DRBD Meta data storage:
/sbin/drbdadm create-md disk1
8. BOTH MACHINES: Start the Distributed Replicated Block Device service on both nodes:
/etc/init.d/drbd start
9. On the node you wish to make a PRIMARY node run drbdadm command:
/sbin/drbdadm -- --overwrite-data-of-peer primary disk1
10. Wait for the Distributed Replicated Block Device disk initial synchronization to complete (100%) and check to confirm you are on
primary node:
cat /proc/drbd
11. Create desired filesystem on Distributed Replicated Block Device device:
/sbin/mkfs.ext4 /dev/drbd1
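The new filesystem can then be mounted on the primary node only (ext4 is not cluster-aware, so never mount it on both nodes at once); the mount point below is an example:

```
mkdir -p /mnt/disk1
mount /dev/drbd1 /mnt/disk1
df -h /mnt/disk1
```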
DRBD Installation Script
#!/bin/sh
# drbd83-install-v01.sh (30 May 2013)
# GeekPeek.Net scripts - Configure and install drbd83 on CentOS 6.X script
# INFO: This script was tested on CentOS 6.4 minimal installation. The script installs and configures
# DRBD 83. It installs ELRepo and drbd83-utils and kmod-drbd83 packages. It inserts drbd
# module and creates drbd resource configuration file. It creates drbd device and EXT4 filesystem on it.
# It adds two new lines to /etc/hosts file and creates new file /etc/cron.hourly/ntpsync.
# All of the actions are done on both of the DRBD nodes so SSH key is generated and transferred for
# easier configuration!
# CODE:
echo "For this script to work as expected, you need to enable root SSH access on the second machine."
echo "Is SSH root access enabled on the second machine? (y/n)"
read rootssh
case $rootssh in
y)
echo "Please enter the second machine IP address."
read ipaddr2
echo "Generating SSH key - press Enter a couple of times..."
/usr/bin/ssh-keygen
echo "Copying SSH key to the second machine..."
echo "Please enter root password for the second machine."
/usr/bin/ssh-copy-id root@$ipaddr2
echo "Successfully set up SSH with key authentication...continuing with package installation on both machines..."
;;
n)
echo "Root access must be enabled on the second machine...exiting!"
exit 1
;;
esac
/bin/rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
/usr/bin/ssh root@$ipaddr2 /bin/rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
/usr/bin/yum install -y kmod-drbd83 drbd83-utils ntpdate
/usr/bin/ssh root@$ipaddr2 /usr/bin/yum install -y kmod-drbd83 drbd83-utils ntpdate
/sbin/modprobe drbd
/usr/bin/ssh root@$ipaddr2 /sbin/modprobe drbd
echo "Creating DRBD resource config file - need some additional INFO."
echo "..."
echo "Which DRBD device is this on your machines - talking about /dev/drbd1, /dev/drbd2,... (example: 1)"
read drbdnum
echo "Enter FQDN of your current machine (example: foo1.geekpeek.net):"
read fqdn1
echo "Enter current machine IP address (example: 192.168.1.100):"
read ipaddr1
echo "Enter current machine disk intended for DRBD (example: /dev/sdb):"
read disk1
echo "Enter FQDN of your second machine (example: foo2.geekpeek.net):"
read fqdn2
echo "Enter second machine IP address (example: 192.168.1.101):"
read ipaddr2
echo "Enter second machine disk intended for DRBD (example: /dev/sdb):"