2. DRBD
DRBD was originally conceived and developed in the late 1990s by Philipp Reisner and Lars
Ellenberg.
The developers' goal was to create a data-replication solution for Linux that would offer high
reliability at minimal cost.
The software has continuously evolved since that time.
The current version of DRBD runs on all common Linux distributions and provides synchronous
replication of stored data between an active system and a passive system.
3. DRBD
In dual-primary (active/active) mode, data can be read from and written to both systems simultaneously.
DRBD also supports resource-level fencing.
DRBD works in conjunction with a cluster management program known as Heartbeat and has
been bundled into some distributions of Linux.
4. DRBD
DRBD is a registered trademark of LINBIT Information Technologies GmbH, headquartered in
Vienna, Austria.
The software has been released under the GNU GPL (General Public License) and can be freely
distributed and modified within the limitations of the license agreement.
5. Avoid pricey storage area network
(SAN) devices and synchronize data
with DRBD, a powerful — and free —
software component available in
Linux.
6. DRBD
DRBD can be summarized as RAID 1 (mirroring) over the network.
That means you need two nodes and a network to connect them.
On both of these nodes, a storage device — typically a local hard disk — is
reserved for the DRBD device.
Once configured, DRBD takes care of synchronizing these devices in real time
over the network.
The difference between DRBD and other synchronization tools, such as rsync, is that DRBD
does its work at the block layer, which makes it fit for almost every use.
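Because DRBD sits at the block layer, the replicated device appears to the system as an ordinary block device, and its replication state can be inspected with standard tools. A minimal sketch; the resource name is hypothetical, and `drbdadm status` is DRBD 9 syntax (DRBD 8.x uses `/proc/drbd` instead):

```shell
# The DRBD device looks like any other block device to the system
lsblk /dev/drbd1

# Check replication and connection state (DRBD 9)
drbdadm status resource0

# Equivalent on DRBD 8.x
cat /proc/drbd
```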
7. DRBD Setups
There are two setups for DRBD: active/passive or active/active.
The active/passive setup closely resembles RAID 1:
◦ Data is written to the active device and replicated to the passive device.
◦ Normally, the passive device doesn't do anything, but if failure occurs, it can be switched to become
the active device.
◦ The active/passive setup is very popular in two-node, high-availability (HA) clusters.
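In an active/passive pair, a manual failover boils down to demoting the old primary and promoting the standby. A sketch, assuming a resource named resource0 and a mount point of /srv/data (both hypothetical):

```shell
# On the old primary (if it is still reachable): release the device
umount /srv/data
drbdadm secondary resource0

# On the standby node: take over the resource and mount the replicated data
drbdadm primary resource0
mount /dev/drbd1 /srv/data
```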
8. DRBD Setups
The alternative is to use DRBD in an active/active setup:
◦ There are still two storage devices involved, but both can be accessed simultaneously,
which means both nodes can operate and serve up data simultaneously.
◦ Therefore, the two nodes can both provide data, servicing more users with better
performance.
◦ This setup comes with several additional requirements, though.
◦ To use an active/active DRBD setup, you also need a cluster-aware file system, such as
Oracle Corp.’s OCFS2 or Red Hat Inc.’s Global File System.
◦ That is because only a cluster-aware file system can guarantee simultaneous writes are
properly synchronized over the network and that two nodes can’t write to the same file at
the same time.
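Dual-primary operation must also be enabled explicitly in the resource configuration. A sketch of the relevant net section, using DRBD 8.4-style syntax (option names vary slightly between versions):

resource resource0 {
    net {
        # Allow both nodes to be primary at the same time (active/active).
        # Only safe with a cluster-aware file system such as OCFS2 or GFS2.
        allow-two-primaries yes;
    }
    ...
}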
9. Making the failover successful
DRBD has become very popular because it allows administrators to configure HA clusters
without the need for an expensive SAN.
Imagine the case of a Web server configured for HA: If the host that is currently running the Web
service goes down, another host in the cluster can take over.
In order to continue its work normally, while running on the other node, the Web server needs
access to the same documents it had while running on the original node.
To ensure your Web server always services the same files, you have to put them on a DRBD
device.
10. Theoretically, you don't need HA clustering software to run DRBD, but having a
cluster makes it easier to manage DRBD.
Without HA software, the administrator needs to make sure a new node is
assigned as the active node after a failure, which involves a manual operation.
When included in a cluster, the cluster software will take care of the failover
automatically, making sure that, after a brief interruption, the service can start
again on the other node.
Also, in an active/active setup, HA cluster software is typically used. This is
because, on top of the DRBD device, a cluster file system must synchronize
access to the device, and a cluster file system is managed by the HA cluster
stack.
11. Split Brain Problem
The most immediate concern for admins is the connection between DRBD and the HA cluster
stack.
If the HA stack fails to manage the DRBD device properly, you risk ending up in a split-brain
situation where both devices think they're in charge.
Fortunately, there is a good manual procedure to resolve issues like that.
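The usual manual recovery is to pick a "victim" node whose local changes will be thrown away and resynchronize it from the survivor. A sketch, assuming a resource named resource0 and DRBD 8.4-style commands:

```shell
# On the split-brain victim (the node whose changes will be discarded):
drbdadm disconnect resource0
drbdadm secondary resource0
drbdadm connect --discard-my-data resource0

# On the surviving node (only needed if it is in StandAlone state):
drbdadm connect resource0
```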
13. /etc/drbd.d/resource0.res
resource resource0 {
    on node01 {
        device    /dev/drbd1;
        disk      /dev/sdc;
        address   192.168.83.151:7789;
        meta-disk internal;
    }
    on node02 {
        device    /dev/drbd1;
        disk      /dev/sdc;
        address   192.168.83.164:7789;
        meta-disk internal;
    }
}
14. Create the metadata: # drbdadm create-md resource0
Bring the resource up: # drbdadm up resource0
Promote one node to primary: # drbdadm primary --force resource0
Format and mount the device
(start and enable the drbd service)
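Putting the last steps together on the primary node; the file system, mount point, and service name here are assumptions (for active/passive, a non-cluster file system such as ext4 is fine):

```shell
# On the node just promoted to primary:
mkfs.ext4 /dev/drbd1          # format the DRBD device, not the backing disk
mkdir -p /srv/data
mount /dev/drbd1 /srv/data

# Ensure DRBD comes up at boot on both nodes (systemd-based distributions):
systemctl enable --now drbd
```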