DRBD Page 1
Distributed Replicated Block Device – DRBD
chanaka.lasantha@gmail.com
DATE: 17TH FEB 2014
DRBD refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a
whole block device via an assigned network. DRBD can be understood as network based raid-1.
In the illustration above, the two orange boxes represent two servers that form an HA cluster. The boxes contain the usual
components of a Linux™ kernel: file system, buffer cache, disk scheduler, disk drivers, TCP/IP stack and network interface
card (NIC) driver. The black arrows illustrate the flow of data between these components.
The orange arrows show the flow of data, as DRBD mirrors the data of a highly available service from the active node of the
HA cluster to the standby node of the HA cluster.
The upper part of this picture shows a cluster where the left node is currently active, i.e., the service's IP address that the
client machines are talking to is currently on the left node.
The service, including its IP address, can be migrated to the other node at any time, either due to a failure of the active
node or as an administrative action. The lower part of the illustration shows a degraded cluster. In HA speak the migration
of a service is called failover, the reverse process is called failback and when the migration is triggered by an administrator
it is called switchover.
What Does DRBD Do?
Mirroring of important data
DRBD works on top of block devices, i.e., hard disk partitions or LVM logical volumes. It mirrors each data block that is
written to disk to the peer node.
From fully synchronous
Mirroring can be done tightly coupled (synchronous). That means the file system on the active node is notified that the
write of a block has finished only when the block has made it to both disks of the cluster.
Synchronous mirroring (called protocol C in DRBD speak) is the right choice for HA clusters where you dare not lose a single
transaction in case of the complete crash of the active (primary in DRBD speak) node.
To asynchronous
The other option is asynchronous mirroring. That means that the entity that issued the write requests is informed about
completion as soon as the data is written to the local disk.
Asynchronous mirroring is necessary to build mirrors over long distances, i.e., where the interconnecting network's
round-trip time is higher than the write latency you can tolerate for your application. (Note: the amount of data by which
the peer node may fall behind is limited by the bandwidth-delay product and the TCP send buffer.)
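In the DRBD 8.3 configuration syntax used later in this guide, the choice between synchronous and asynchronous replication is made with a resource-level protocol statement. A minimal fragment as a sketch (r0 is a placeholder resource name; the resource file shown later in this document omits the statement, and protocol C, fully synchronous, is the usual choice for HA pairs):

```
resource r0 {
  protocol C;   # C = synchronous, B = semi-synchronous, A = asynchronous
  ...
}
```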
Data accessible only on the active node
A consequence of mirroring data on block device level is that you can access your data (using a file system) only on the
active node. This is not a shortcoming of DRBD but is caused by the nature of most file systems (ext3, XFS, JFS, ext4 ...).
These file systems are designed for one computer accessing one disk, so they cannot cope with two computers accessing
one (virtually) shared disk.
In spite of this limitation, there are still a few ways to access the data on the second node:
 Use DRBD on logical volumes and use LVM's snapshot capability on the standby node to access the data via a
snapshot.
 DRBD's primary-primary mode with a shared-disk file system (GFS, OCFS2). These systems are very sensitive to
failures of the replication network.
What Does DRBD Do After an Outage?
After a node outage
After an outage of a node, DRBD automatically resynchronizes the temporarily unavailable node to the latest version of
the data, in the background, without interfering with the running service. Of course, this also works if the role of the
surviving node changed while the peer was down.
In case a complete power outage takes both nodes down, DRBD will detect which of the nodes was down longer, and will
do the resynchronization in the right direction.
After an outage of the replication network
Restoring service after a temporary failure of the replication network is a typical example of how the automatic recovery
mechanism described above works. DRBD reestablishes the connection and does the necessary resynchronization
automatically.
After an outage of a storage subsystem
DRBD can mask the failure of a disk on the active node, i.e., the service can continue to run there without needing to fail
over. If the disk can be replaced without shutting down the machine, it can be reattached to DRBD, which resynchronizes
the data as needed to the replacement disk.
After an outage of all network links
DRBD supports you with various automatic and manual recovery options in the event of split brain.
Split brain is a situation where, due to the temporary failure of all network links between cluster nodes, and possibly due to
intervention by cluster management software or human error, both nodes switched to the primary role while
disconnected. This is a potentially harmful state, as it implies that modifications to the data might have been made on
either node, without having been replicated to the peer. Thus, it is likely in this situation that two diverging sets of data
have been created that cannot be merged.
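A split brain usually shows up in /proc/drbd as a StandAlone connection state on a node that still holds the Primary role. A small sketch that extracts the cs: field from a status line; the status line below is an illustrative captured sample, not live output:

```shell
# Illustrative /proc/drbd status line as it can appear after a split brain
status=' 1: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown   r----'

# Extract the connection state (the cs: field)
cs=$(printf '%s\n' "$status" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')

if [ "$cs" = "StandAlone" ]; then
  echo "possible split brain: node is StandAlone"
else
  echo "connection state: $cs"
fi
```

On a live node you would feed the real file (`cat /proc/drbd`) through the same extraction instead of the sample string.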
Distributed Replicated Block Device is effectively a network-based RAID 1. You would configure DRBD on your system if you:
 Need to secure data on a certain disk and therefore mirror your data to another machine over the network.
 Are configuring a High Availability cluster or service.
REQUIREMENTS:
 additional disk for synchronization on BOTH MACHINES (preferably same size)
 network connectivity between machines
 working DNS resolution (can fix with /etc/hosts file)
 NTP synchronized times on both nodes
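The requirements above can be verified before starting. A hypothetical preflight sketch (PEER and DISK are placeholder values to adapt for your nodes; each check prints PASS or FAIL without changing anything):

```shell
# Hypothetical preflight check; adjust PEER and DISK for your nodes.
PEER="node2.chanaka.net"
DISK="/dev/sdb"

check() {  # run a test command and report PASS/FAIL with a label
  if eval "$2" >/dev/null 2>&1; then echo "PASS: $1"; else echo "FAIL: $1"; fi
}

check "extra disk $DISK present" "test -b $DISK"
check "DNS resolves $PEER"       "getent hosts $PEER"
check "peer $PEER reachable"     "ping -c 1 -W 2 $PEER"
```

A FAIL on the DNS line is the case fixed with /etc/hosts entries in step 5 below.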
NTP synchronized times on both nodes (configure on both nodes)
yum -y install ntp
vim /etc/ntp.conf
# line 19: add the network range you allow to receive requests
restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap
# change servers for synchronization
#server 0.rhel.pool.ntp.org
#server 1.rhel.pool.ntp.org
#server 2.rhel.pool.ntp.org
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
/etc/rc.d/init.d/ntpd start
chkconfig ntpd on
ntpq -p
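In `ntpq -p` output, the peer ntpd has selected as its sync source is marked with a leading `*`. A sketch that checks a captured sample of that output (the sample below is illustrative, not live):

```shell
# Illustrative ntpq -p output (captured sample, not live)
sample='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*0.asia.pool.ntp .GPS.            1 u   32   64  377   12.3    0.512   0.201
+1.asia.pool.ntp 10.0.0.1         2 u   28   64  377   15.1    1.204   0.310'

# A peer line starting with "*" means ntpd has selected a sync source.
if printf '%s\n' "$sample" | grep -q '^\*'; then
  selected=yes
else
  selected=no
fi
echo "peer selected: $selected"
```

On a live node you would pipe the real `ntpq -p` output into the same grep.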
1. BOTH MACHINES: Install the ELRepo repository on your system.
Date:
date -s "9 AUG 2013 11:32:08"
Import the public key:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
To install ELRepo for RHEL-6, SL-6 or CentOS-6:
rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
2. BOTH MACHINES: Install the Distributed Replicated Block Device utils and kmod packages from ELRepo.
Choose the version you prefer – drbd83 or drbd84 (I've had problems with drbd84 on kernel 2.6.32-358.6.1.el6.i686).
yum install -y kmod-drbd83 drbd83-utils
3. BOTH MACHINES: Load the drbd kernel module manually or just reboot both machines.
/sbin/modprobe drbd
4. BOTH MACHINES: Create the Distributed Replicated Block Device resource file (/etc/drbd.d/disk1.res) and transfer it to
the other machine (these files need to be exactly the same on both machines!).
vim /etc/drbd.d/disk1.res
resource disk1
{
startup {
wfc-timeout 30;
outdated-wfc-timeout 20;
degr-wfc-timeout 30;
}
net {
cram-hmac-alg sha1;
shared-secret sync_disk;
}
syncer {
rate 100M;
verify-alg sha1;
}
on node1.chanaka.net {
device minor 1;
disk /dev/sdb;
address 192.168.1.100:7789;
meta-disk internal;
}
on node2.chanaka.net {
device minor 1;
disk /dev/sdb;
address 192.168.1.101:7789;
meta-disk internal;
}
}
5. BOTH MACHINES: Make sure that DNS resolution is working as expected!
To quickly fix DNS resolution, add IP address to FQDN mappings to /etc/hosts on both machines as follows:
vim /etc/hosts
192.168.1.100 node1.chanaka.net
192.168.1.101 node2.chanaka.net
6. BOTH MACHINES: Make sure that both machines are using NTP for time synchronization!
To quickly fix this, add an entry to your /etc/crontab file as follows, choosing your NTP sync server:
vim /etc/crontab
Or
crontab -e
1 * * * * root ntpdate your.ntp.server
7. BOTH MACHINES: Initialize the DRBD Meta data storage:
/sbin/drbdadm create-md disk1
8. BOTH MACHINES: Start the Distributed Replicated Block Device service on both nodes:
/etc/init.d/drbd start
9. On the node you wish to make the PRIMARY, run the drbdadm command:
/sbin/drbdadm -- --overwrite-data-of-peer primary disk1
10. Wait for the Distributed Replicated Block Device disk initial synchronization to complete (100%) and check to confirm
you are on the primary node:
cat /proc/drbd
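During the initial synchronization, /proc/drbd contains a progress line with a percentage. A sketch that pulls the percentage out of such a line (the line below is an illustrative sample, not live output):

```shell
# Illustrative progress line from /proc/drbd during the initial sync
line="  [====>...............] sync'ed: 24.8% (154084/204800)K"

# Pull out the percentage complete
pct=$(printf '%s\n' "$line" | sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p")
echo "sync progress: ${pct}%"
```

On a live node the same sed works against `grep "sync'ed" /proc/drbd`.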
11. Create the desired filesystem on the Distributed Replicated Block Device device:
/sbin/mkfs.ext4 /dev/drbd1
DRBD Installation Script
#!/bin/sh
# drbd83-install-v01.sh (30 May 2013)
# GeekPeek.Net scripts - Configure and install drbd83 on CentOS 6.X script
# INFO: This script was tested on CentOS 6.4 minimal installation. The script installs and configures
# DRBD 83. It installs ELRepo and drbd83-utils and kmod-drbd83 packages. It inserts drbd
# module and creates drbd resource configuration file. It creates drbd device and EXT4 filesystem on it.
# It adds two new lines to /etc/hosts file and creates new file /etc/cron.hourly/ntpsync.
# All of the actions are done on both of the DRBD nodes so SSH key is generated and transferred for
# easier configuration!
# CODE:
echo "For this script to work as expected, you need to enable root SSH access on the second machine."
echo "Is SSH root access enabled on the second machine? (y/n)"
read rootssh
case $rootssh in
y)
echo "Please enter the second machine IP address."
read ipaddr2
echo "Generating SSH key - press Enter a couple of times..."
/usr/bin/ssh-keygen
echo "Copying SSH key to the second machine..."
echo "Please enter root password for the second machine."
/usr/bin/ssh-copy-id root@$ipaddr2
echo "Successfully set up SSH with key authentication...continuing with package installation on both machines..."
;;
n)
echo "Root access must be enabled on the second machine...exiting!"
exit 1
;;
esac
/bin/rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
/usr/bin/ssh root@$ipaddr2 /bin/rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
/usr/bin/yum install -y kmod-drbd83 drbd83-utils ntpdate
/usr/bin/ssh root@$ipaddr2 /usr/bin/yum install -y kmod-drbd83 drbd83-utils ntpdate
/sbin/modprobe drbd
/usr/bin/ssh root@$ipaddr2 /sbin/modprobe drbd
echo "Creating DRBD resource config file - need some additional INFO."
echo "..."
echo "Which DRBD device is this on your machines - talking about /dev/drbd1, /dev/drbd2,... (example: 1)"
read drbdnum
echo "Enter FQDN of your current machine (example: foo1.geekpeek.net):"
read fqdn1
echo "Enter current machine IP address (example: 192.168.1.100):"
read ipaddr1
echo "Enter current machine disk intended for DRBD (example: /dev/sdb):"
read disk1
echo "Enter FQDN of your second machine (example: foo2.geekpeek.net):"
read fqdn2
echo "Enter second machine IP address (example: 192.168.1.101):"
read ipaddr2
echo "Enter second machine disk intended for DRBD (example: /dev/sdb):"
read disk2
echo "Enter suitable NTP server for time synchronization (example: ntp1.arnes.si):"
read ntpserver
echo "Creating DRBD configuration file..."
echo "resource disk$drbdnum" >> /etc/drbd.d/disk$drbdnum.res
echo "{" >> /etc/drbd.d/disk$drbdnum.res
echo " startup {" >> /etc/drbd.d/disk$drbdnum.res
echo " wfc-timeout 30;" >> /etc/drbd.d/disk$drbdnum.res
echo " outdated-wfc-timeout 20;" >> /etc/drbd.d/disk$drbdnum.res
echo " degr-wfc-timeout 30;" >> /etc/drbd.d/disk$drbdnum.res
echo " }" >> /etc/drbd.d/disk$drbdnum.res
echo " net {" >> /etc/drbd.d/disk$drbdnum.res
echo " cram-hmac-alg sha1;" >> /etc/drbd.d/disk$drbdnum.res
echo " shared-secret "sync_disk";" >> /etc/drbd.d/disk$drbdnum.res
echo " }" >> /etc/drbd.d/disk$drbdnum.res
echo " syncer {" >> /etc/drbd.d/disk$drbdnum.res
echo " rate 100M;" >> /etc/drbd.d/disk$drbdnum.res
echo " verify-alg sha1;" >> /etc/drbd.d/disk$drbdnum.res
echo " }" >> /etc/drbd.d/disk$drbdnum.res
echo " on $fqdn1 {" >> /etc/drbd.d/disk$drbdnum.res
echo " device minor $drbdnum;" >> /etc/drbd.d/disk$drbdnum.res
echo " disk $disk1;" >> /etc/drbd.d/disk$drbdnum.res
echo " address $ipaddr1:7789;" >> /etc/drbd.d/disk$drbdnum.res
echo " meta-disk internal;" >> /etc/drbd.d/disk$drbdnum.res
echo " }" >> /etc/drbd.d/disk$drbdnum.res
echo " on $fqdn2 {" >> /etc/drbd.d/disk$drbdnum.res
echo " device minor $drbdnum;" >> /etc/drbd.d/disk$drbdnum.res
echo " disk $disk2;" >> /etc/drbd.d/disk$drbdnum.res
echo " address $ipaddr2:7789;" >> /etc/drbd.d/disk$drbdnum.res
echo " meta-disk internal;" >> /etc/drbd.d/disk$drbdnum.res
echo " }" >> /etc/drbd.d/disk$drbdnum.res
echo "}" >> /etc/drbd.d/disk$drbdnum.res
echo "DRBD configuration file created: /etc/drbd.d/disk$drbdnum.res"
echo "$ipaddr1 $fqdn1" >> /etc/hosts
echo "$ipaddr2 $fqdn2" >> /etc/hosts
/usr/bin/scp /etc/hosts root@$ipaddr2:/etc/
echo "ntpdate $ntpserver" >> /etc/cron.hourly/ntpsync
/bin/chmod +x /etc/cron.hourly/ntpsync
/usr/bin/scp /etc/cron.hourly/ntpsync root@$ipaddr2:/etc/cron.hourly/
/usr/bin/ssh root@$ipaddr2 "echo '1 * * * * root ntpdate $ntpserver' >> /etc/crontab"
/usr/bin/ssh root@$ipaddr2 "echo '$ipaddr1 $fqdn1' >> /etc/hosts"
/usr/bin/ssh root@$ipaddr2 "echo '$ipaddr2 $fqdn2' >> /etc/hosts"
/usr/bin/scp /etc/drbd.d/disk$drbdnum.res root@$ipaddr2:/etc/drbd.d/
/sbin/drbdadm create-md disk$drbdnum
/usr/bin/ssh root@$ipaddr2 /sbin/drbdadm create-md disk$drbdnum
/usr/bin/ssh root@$ipaddr2 /etc/init.d/drbd start &
/etc/init.d/drbd start
/sbin/drbdadm -- --overwrite-data-of-peer primary disk$drbdnum
/sbin/mkfs.ext4 /dev/drbd$drbdnum
sleep 5
/bin/cat /proc/drbd
echo "DRBD configuration completed! Please wait for the disk synchronization to complete..."
echo "...then you can now mount your DRBD disk on primary node!"
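The long run of echo statements in the script above can be collapsed into a single heredoc. A minimal sketch using placeholder example values (the script would take them from its read prompts) and writing to a temporary file rather than /etc/drbd.d, so it can be tried safely:

```shell
# Example values; in the script these come from the read prompts.
drbdnum=1
fqdn1=node1.chanaka.net; ipaddr1=192.168.1.100; disk1=/dev/sdb
fqdn2=node2.chanaka.net; ipaddr2=192.168.1.101; disk2=/dev/sdb

res=$(mktemp)   # stand-in for /etc/drbd.d/disk$drbdnum.res
cat > "$res" <<EOF
resource disk$drbdnum
{
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 100M;
    verify-alg sha1;
  }
  on $fqdn1 {
    device minor $drbdnum;
    disk $disk1;
    address $ipaddr1:7789;
    meta-disk internal;
  }
  on $fqdn2 {
    device minor $drbdnum;
    disk $disk2;
    address $ipaddr2:7789;
    meta-disk internal;
  }
}
EOF
grep -c 'meta-disk internal' "$res"   # prints 2: both host sections present
```

The unquoted EOF delimiter lets the shell expand the $drbdnum, $fqdn*, $ipaddr* and $disk* variables inside the heredoc, producing the same file as the echo version in one statement.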
Ask by linux kernel add or delete a hddAsk by linux kernel add or delete a hdd
Ask by linux kernel add or delete a hdd
 
Free radius billing server with practical vpn exmaple
Free radius billing server with practical vpn exmapleFree radius billing server with practical vpn exmaple
Free radius billing server with practical vpn exmaple
 
One key sheard site to site open vpn
One key sheard site to site open vpnOne key sheard site to site open vpn
One key sheard site to site open vpn
 
Usrt to ethernet connectivity over the wolrd cubieboard bords
Usrt to ethernet connectivity over the wolrd cubieboard bordsUsrt to ethernet connectivity over the wolrd cubieboard bords
Usrt to ethernet connectivity over the wolrd cubieboard bords
 
Site to-multi site open vpn solution with mysql db
Site to-multi site open vpn solution with mysql dbSite to-multi site open vpn solution with mysql db
Site to-multi site open vpn solution with mysql db
 
Site to-multi site open vpn solution. with active directory auth
Site to-multi site open vpn solution. with active directory authSite to-multi site open vpn solution. with active directory auth
Site to-multi site open vpn solution. with active directory auth
 
Site to-multi site open vpn solution-latest
Site to-multi site open vpn solution-latestSite to-multi site open vpn solution-latest
Site to-multi site open vpn solution-latest
 
Install elasticsearch, logstash and kibana
Install elasticsearch, logstash and kibana Install elasticsearch, logstash and kibana
Install elasticsearch, logstash and kibana
 
Oracle cluster installation with grid and nfs
Oracle cluster  installation with grid and nfsOracle cluster  installation with grid and nfs
Oracle cluster installation with grid and nfs
 
Oracle cluster installation with grid and iscsi
Oracle cluster  installation with grid and iscsiOracle cluster  installation with grid and iscsi
Oracle cluster installation with grid and iscsi
 
AUTOMATIC JBOSS CLUSTER MANAGEMENT SYSTEM (PYTHON)
AUTOMATIC JBOSS CLUSTER MANAGEMENT SYSTEM (PYTHON)AUTOMATIC JBOSS CLUSTER MANAGEMENT SYSTEM (PYTHON)
AUTOMATIC JBOSS CLUSTER MANAGEMENT SYSTEM (PYTHON)
 
ully Automatic WSO2 Enterprise Service Bus(ESB) Cluster Management System
ully Automatic WSO2 Enterprise Service Bus(ESB) Cluster Management Systemully Automatic WSO2 Enterprise Service Bus(ESB) Cluster Management System
ully Automatic WSO2 Enterprise Service Bus(ESB) Cluster Management System
 
Docker framework
Docker frameworkDocker framework
Docker framework
 

Recently uploaded

Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Paul Groth
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
Product School
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
Fwdays
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 

Recently uploaded (20)

Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 

Distributed Replicated Block Device

  • 1. DRBD Page 1
     Distributed Replicated Block Device – DRBD
     chanaka.lasantha@gmial.com
     DATE: 17TH FEB 2014
  • 2. DRBD Page 2
     DRBD refers to block devices designed as a building block to form high availability (HA) clusters. This is done by mirroring a whole block device via an assigned network. DRBD can be understood as network-based RAID-1.
     In the illustration above, the two orange boxes represent two servers that form an HA cluster. The boxes contain the usual components of a Linux™ kernel: file system, buffer cache, disk scheduler, disk drivers, TCP/IP stack and network interface card (NIC) driver. The black arrows illustrate the flow of data between these components.
     The orange arrows show the flow of data as DRBD mirrors the data of a highly available service from the active node of the HA cluster to the standby node.
     The upper part of the picture shows a cluster where the left node is currently active, i.e., the service's IP address that the client machines talk to is currently on the left node.
  • 3. DRBD Page 3
     The service, including its IP address, can be migrated to the other node at any time, either due to a failure of the active node or as an administrative action. The lower part of the illustration shows a degraded cluster. In HA speak, the migration of a service is called failover, the reverse process is called failback, and when the migration is triggered by an administrator it is called switchover.
     What Does DRBD Do?
     Mirroring of important data
     DRBD works on top of block devices, i.e., hard disk partitions or LVM logical volumes. It mirrors each data block that is written to disk to the peer node.
     From fully synchronous
     Mirroring can be done tightly coupled (synchronous). That means the file system on the active node is notified that writing of the block has finished only when the block has made it to both disks of the cluster. Synchronous mirroring (called protocol C in DRBD speak) is the right choice for HA clusters where you dare not lose a single transaction in case of a complete crash of the active (primary, in DRBD speak) node.
     To asynchronous
     The other option is asynchronous mirroring: the entity that issued the write request is informed about completion as soon as the data is written to the local disk. Asynchronous mirroring (protocol A) is necessary to build mirrors over long distances, i.e., when the interconnecting network's round-trip time is higher than the write latency you can tolerate for your application. (Note: the amount of data the peer node may fall behind is limited by the bandwidth-delay product and the TCP send buffer.)
     Data accessible only on the active node
     A consequence of mirroring data at the block-device level is that you can access your data (using a file system) only on the active node. This is not a shortcoming of DRBD but is caused by the nature of most file systems (ext3, XFS, JFS, ext4 ...).
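The synchronous/asynchronous choice described above maps to the `protocol` keyword in a DRBD resource definition. A minimal, illustrative fragment (the resource name is an example, and the remaining sections are omitted):

```
resource disk1 {
  protocol C;   # C = fully synchronous; B = semi-synchronous; A = asynchronous
  # startup, net, syncer and on sections omitted
}
```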
     These file systems are designed for one computer accessing one disk, so they cannot cope with two computers accessing one (virtually) shared disk. In spite of this limitation, there are still a few ways to access the data on the second node:
     - Use DRBD on logical volumes and use LVM's capabilities to take snapshots on the standby node, and access the data via the snapshot.
     - Use DRBD's primary-primary mode with a shared-disk file system (GFS, OCFS2). These systems are very sensitive to failures of the replication network.
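The first option (snapshot access on the standby node) boils down to a short LVM command sequence. A hedged sketch: the commands are only collected and printed here, since they need root and a real volume group; vg0/lv_drbd and /mnt/snap are hypothetical names, so adjust them to your layout.

```shell
#!/bin/sh
# Sketch: read-only access to DRBD data on the standby node via an LVM snapshot.
# vg0/lv_drbd and /mnt/snap are hypothetical; run on a real standby node only.
SNAP_CMDS='lvcreate --snapshot --size 1G --name drbd_snap /dev/vg0/lv_drbd
mount -o ro /dev/vg0/drbd_snap /mnt/snap'
CLEANUP_CMDS='umount /mnt/snap
lvremove -f /dev/vg0/drbd_snap'
echo "To take and mount the snapshot:"
echo "$SNAP_CMDS"
echo "To clean up afterwards:"
echo "$CLEANUP_CMDS"
```

Remember that the snapshot is of the DRBD backing device, so the file system on it may need a journal replay when mounted.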
  • 4. DRBD Page 4
     What DRBD Does After an Outage
     After a node outage
     After an outage of a node, DRBD automatically resynchronizes the temporarily unavailable node to the latest version of the data, in the background, without interfering with the running service. Of course, this also works if the role of the surviving node was changed while the peer was down. In case a complete power outage takes both nodes down, DRBD will detect which of the nodes was down longer and will do the resynchronization in the right direction.
     After an outage of the replication network
     Restoring service after a temporary failure of the replication network is just a typical example of how the automatic recovery mechanism just described works. DRBD will reestablish the connection and do the necessary resynchronization automatically.
     After an outage of a storage subsystem
     DRBD can mask the failure of a disk on the active node, i.e., the service can continue to run there without needing to fail over. If the disk can be replaced without shutting down the machine, it can be reattached to DRBD. DRBD resynchronizes the data as needed to the replacement disk.
     After an outage of all network links
     DRBD supports you with various automatic and manual recovery options in the event of split brain. Split brain is a situation where, due to the temporary failure of all network links between cluster nodes, and possibly due to intervention by cluster management software or human error, both nodes switched to the primary role while disconnected. This is a potentially harmful state, as it implies that modifications might have been made to the data on either node without having been replicated to the peer. It is therefore likely that two diverging sets of data have been created that cannot be merged.
     Distributed Replicated Block Device is actually a network-based RAID-1.
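When automatic recovery is not configured, split brain is resolved manually by discarding one node's modifications. A sketch of the usual DRBD 8.3 command sequence, collected into variables and printed so it can be reviewed before being run on a real cluster (disk1 is an example resource name):

```shell
#!/bin/sh
# Manual split-brain recovery sketch for DRBD 8.3 (resource "disk1" assumed).
# These commands must run on real cluster nodes; here they are only printed.
VICTIM_CMDS='drbdadm secondary disk1
drbdadm -- --discard-my-data connect disk1'
SURVIVOR_CMDS='drbdadm connect disk1'
echo "On the node whose changes are to be DISCARDED:"
echo "$VICTIM_CMDS"
echo "On the surviving node (if it is StandAlone):"
echo "$SURVIVOR_CMDS"
```

Pick the victim node carefully: everything written there since the split will be thrown away and resynchronized from the survivor.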
     You should configure DRBD on your system if you:
     - need to secure data on a certain disk and are therefore mirroring your data to another machine via the network, or
     - are configuring a High Availability cluster or service.
     REQUIREMENTS:
     - an additional disk for synchronization on BOTH MACHINES (preferably the same size)
     - network connectivity between the machines
     - working DNS resolution (can be fixed with the /etc/hosts file)
     - NTP-synchronized time on both nodes
  • 5. DRBD Page 5
     NTP-synchronized time on both nodes (configure on both nodes):
     yum -y install ntp
     vim /etc/ntp.conf
       # line 19: add the network range you allow to receive requests
       restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap
       # change servers for synchronization
       #server 0.rhel.pool.ntp.org
       #server 1.rhel.pool.ntp.org
       #server 2.rhel.pool.ntp.org
       server 0.asia.pool.ntp.org
       server 1.asia.pool.ntp.org
       server 2.asia.pool.ntp.org
       server 3.asia.pool.ntp.org
     /etc/rc.d/init.d/ntpd start
     chkconfig ntpd on
     ntpq -p
     1. BOTH MACHINES: Install the ELRepo repository on your system (the DRBD packages come from ELRepo).
     Set the date if needed:
     date -s "9 AUG 2013 11:32:08"
     Import the public key:
     rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
     To install ELRepo for RHEL-6, SL-6 or CentOS-6:
     rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
     2. BOTH MACHINES: Install the DRBD utils and kmod packages from ELRepo. Choose the version you prefer – drbd83 or drbd84 (I've had problems with drbd84 on kernel 2.6.32-358.6.1.el6.i686).
     yum install -y kmod-drbd83 drbd83-utils
     3. BOTH MACHINES: Insert the drbd module manually, or just reboot both machines.
     /sbin/modprobe drbd
     4. BOTH MACHINES: Create the DRBD resource file (/etc/drbd.d/disk1.res) and transfer it to the other machine (these files need to be exactly the same on both machines!).
     vim /etc/drbd.d/disk1.res
     resource disk1 {
       startup {
         wfc-timeout 30;
         outdated-wfc-timeout 20;
         degr-wfc-timeout 30;
  • 6. DRBD Page 6
       }
       net {
         cram-hmac-alg sha1;
         shared-secret sync_disk;
       }
       syncer {
         rate 100M;
         verify-alg sha1;
       }
       on node1.chanaka.net {
         device minor 1;
         disk /dev/sdb;
         address 192.168.1.100:7789;
         meta-disk internal;
       }
       on node2.chanaka.net {
         device minor 1;
         disk /dev/sdb;
         address 192.168.1.101:7789;
         meta-disk internal;
       }
     }
     5. BOTH MACHINES: Make sure that DNS resolution is working as expected! To quickly fix DNS resolution, add the IP addresses and FQDNs to /etc/hosts on both machines as follows:
     vim /etc/hosts
       192.168.1.100 node1.chanaka.net
       192.168.1.101 node2.chanaka.net
     6. BOTH MACHINES: Make sure that both machines are using NTP for time synchronization! To quickly fix this, add an entry to your /etc/crontab file (vim /etc/crontab, or crontab -e) and choose your NTP sync server:
       1 * * * * root ntpdate your.ntp.server
     7. BOTH MACHINES: Initialize the DRBD metadata storage:
     /sbin/drbdadm create-md disk1
     8. BOTH MACHINES: Start the DRBD service on both nodes:
     /etc/init.d/drbd start
     9. On the node you wish to make the PRIMARY node, run the drbdadm command:
     /sbin/drbdadm -- --overwrite-data-of-peer primary disk1
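The resource file from step 4 can also be generated from variables rather than edited by hand (the installation script at the end of this document does essentially this with echo). A minimal sketch using a heredoc; the hostnames, IPs and disk are the example values from the text, and the output path here is a local file rather than /etc/drbd.d/:

```shell
#!/bin/sh
# Generate a DRBD 8.3 resource file from variables (example values from the text).
RES=disk1
NODE1=node1.chanaka.net IP1=192.168.1.100
NODE2=node2.chanaka.net IP2=192.168.1.101
DISK=/dev/sdb PORT=7789
OUT=./$RES.res   # on a real node, write to /etc/drbd.d/ instead

cat > "$OUT" <<EOF
resource $RES {
  startup {
    wfc-timeout 30;
    outdated-wfc-timeout 20;
    degr-wfc-timeout 30;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret sync_disk;
  }
  syncer {
    rate 100M;
    verify-alg sha1;
  }
  on $NODE1 {
    device minor 1;
    disk $DISK;
    address $IP1:$PORT;
    meta-disk internal;
  }
  on $NODE2 {
    device minor 1;
    disk $DISK;
    address $IP2:$PORT;
    meta-disk internal;
  }
}
EOF
grep -c "meta-disk internal" "$OUT"   # counts the two node stanzas
```

Generating the file once and copying it (e.g. with scp) is an easy way to satisfy the "exactly the same on both machines" requirement.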
  • 7. DRBD Page 7
     10. Wait for the DRBD disk's initial synchronization to complete (100%) and check to confirm that you are on the primary node:
     cat /proc/drbd
     11. Create the desired filesystem on the DRBD device:
     /sbin/mkfs.ext4 /dev/drbd1
     DRBD Installation Script
     #!/bin/sh
     # drbd83-install-v01.sh (30 May 2013)
     # GeekPeek.Net scripts - Configure and install drbd83 on CentOS 6.X script
     # INFO: This script was tested on a CentOS 6.4 minimal installation. The script installs and configures
     # DRBD 83. It installs ELRepo and the drbd83-utils and kmod-drbd83 packages. It inserts the drbd
     # module and creates the drbd resource configuration file. It creates the drbd device and an EXT4 filesystem on it.
     # It adds two new lines to the /etc/hosts file and creates the new file /etc/cron.hourly/ntpsync.
     # All of the actions are done on both of the DRBD nodes, so an SSH key is generated and transferred for
     # easier configuration!
     # CODE:
     echo "For this script to work as expected, you need to enable root SSH access on the second machine."
     echo "Is SSH root access enabled on the second machine? (y/n)"
     read rootssh
     case $rootssh in
       y)
         echo "Please enter the second machine IP address."
         read ipaddr2
         echo "Generating SSH key - press Enter a couple of times..."
         /usr/bin/ssh-keygen
         echo "Copying SSH key to the second machine..."
         echo "Please enter root password for the second machine."
         /usr/bin/ssh-copy-id root@$ipaddr2
  • 8. DRBD Page 8 echo "Succesfully set up SSH with key authentication...continuing with package installation on both machines..." ;; n) echo "Root access must be enabled on the second machine...exiting!" exit 1 ;; esac /bin/rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm /usr/bin/ssh root@$ipaddr2 /bin/rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm /usr/bin/yum install -y kmod-drbd83 drbd83-utils ntpdate /usr/bin/ssh root@$ipaddr2 /usr/bin/yum install -y kmod-drbd83 drbd83-utils ntpdate /sbin/modprobe drbd /usr/bin/ssh root@$ipaddr2 /sbin/modprobe drbd echo "Creating DRBD resource config file - need some additional INFO." echo "..." echo "Which DRBD device is this on your machines - talking about /dev/drbd1, /dev/drbd2,... (example: 1)" read drbdnum echo "Enter FQDN of your current machine (example: foo1.geekpeek.net):" read fqdn1 echo "Enter current machine IP address (example: 192.168.1.100):" read ipaddr1 echo "Enter current machine disk intended for DRBD (example: /dev/sdb):" read disk1 echo "Enter FQDN of your second machine (example: foo2.geekpeek.net):" read fqdn2 echo "Enter second machine IP address (example: 192.168.1.101):" read ipaddr2 echo "Enter second machine disk intended for DRBD (example: /dev/sdb):"
  • 9. DRBD Page 9 read disk2 echo "Enter suitable NTP server for time synchronization (example: ntp1.arnes.si):" read ntpserver echo "Creating DRBD configuration file..." echo "resource disk$drbdnum" >> /etc/drbd.d/disk$drbdnum.res echo "{" >> /etc/drbd.d/disk$drbdnum.res echo " startup {" >> /etc/drbd.d/disk$drbdnum.res echo " wfc-timeout 30;" >> /etc/drbd.d/disk$drbdnum.res echo " outdated-wfc-timeout 20;" >> /etc/drbd.d/disk$drbdnum.res echo " degr-wfc-timeout 30;" >> /etc/drbd.d/disk$drbdnum.res echo " }" >> /etc/drbd.d/disk$drbdnum.res echo " net {" >> /etc/drbd.d/disk$drbdnum.res echo " cram-hmac-alg sha1;" >> /etc/drbd.d/disk$drbdnum.res echo " shared-secret "sync_disk";" >> /etc/drbd.d/disk$drbdnum.res echo " }" >> /etc/drbd.d/disk$drbdnum.res echo " syncer {" >> /etc/drbd.d/disk$drbdnum.res echo " rate 100M;" >> /etc/drbd.d/disk$drbdnum.res echo " verify-alg sha1;" >> /etc/drbd.d/disk$drbdnum.res echo " }" >> /etc/drbd.d/disk$drbdnum.res echo " on $fqdn1 {" >> /etc/drbd.d/disk$drbdnum.res echo " device minor $drbdnum;" >> /etc/drbd.d/disk$drbdnum.res echo " disk $disk1;" >> /etc/drbd.d/disk$drbdnum.res echo " address $ipaddr1:7789;" >> /etc/drbd.d/disk$drbdnum.res echo " meta-disk internal;" >> /etc/drbd.d/disk$drbdnum.res echo " }" >> /etc/drbd.d/disk$drbdnum.res echo " on $fqdn2 {" >> /etc/drbd.d/disk$drbdnum.res echo " device minor $drbdnum;" >> /etc/drbd.d/disk$drbdnum.res echo " disk $disk2;" >> /etc/drbd.d/disk$drbdnum.res
  • 10. DRBD Page 10
     echo " address $ipaddr2:7789;" >> /etc/drbd.d/disk$drbdnum.res
     echo " meta-disk internal;" >> /etc/drbd.d/disk$drbdnum.res
     echo " }" >> /etc/drbd.d/disk$drbdnum.res
     echo "}" >> /etc/drbd.d/disk$drbdnum.res
     echo "DRBD configuration file created: /etc/drbd.d/disk$drbdnum.res"
     echo "$ipaddr1 $fqdn1" >> /etc/hosts
     echo "$ipaddr2 $fqdn2" >> /etc/hosts
     /usr/bin/scp /etc/hosts root@$ipaddr2:/etc/
     echo "ntpdate $ntpserver" >> /etc/cron.hourly/ntpsync
     /bin/chmod +x /etc/cron.hourly/ntpsync
     /usr/bin/scp /etc/cron.hourly/ntpsync root@$ipaddr2:/etc/cron.hourly/
     /usr/bin/ssh root@$ipaddr2 "echo '1 * * * * root ntpdate $ntpserver' >> /etc/crontab"
     /usr/bin/ssh root@$ipaddr2 "echo '$ipaddr1 $fqdn1' >> /etc/hosts"
     /usr/bin/ssh root@$ipaddr2 "echo '$ipaddr2 $fqdn2' >> /etc/hosts"
     /usr/bin/scp /etc/drbd.d/disk$drbdnum.res root@$ipaddr2:/etc/drbd.d/
     /sbin/drbdadm create-md disk$drbdnum
     /usr/bin/ssh root@$ipaddr2 /sbin/drbdadm create-md disk$drbdnum
     /usr/bin/ssh root@$ipaddr2 /etc/init.d/drbd start &
     /etc/init.d/drbd start
     /sbin/drbdadm -- --overwrite-data-of-peer primary disk$drbdnum
     /sbin/mkfs.ext4 /dev/drbd$drbdnum
     sleep 5
     /bin/cat /proc/drbd
     echo "DRBD configuration completed! Please wait for the disk synchronization to complete..."
     echo "...then you can mount your DRBD disk on the primary node!"
     (Note: the quoted ssh commands above ensure the >> redirections run on the remote machine; in the original script the redirection happened locally.)
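The "wait for synchronization to complete" step can be scripted by polling /proc/drbd. A sketch that extracts the resync percentage; the sample status line below imitates DRBD 8.3 output, so on a real node you would read /proc/drbd instead:

```shell
#!/bin/sh
# Extract the resync percentage from DRBD status text.
# On a live node: STATUS=$(cat /proc/drbd)
STATUS="  [===>................] sync'ed: 31.4% (70456/102400)M"
PCT=$(printf '%s\n' "$STATUS" | sed -n "s/.*sync.ed: *\([0-9.]*\)%.*/\1/p")
echo "resync at ${PCT}%"
```

In a loop, one would poll until /proc/drbd reports UpToDate/UpToDate instead of a sync percentage, then mount the DRBD device on the primary node.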