Pacemaker Cluster Installation
This post shows how to perform a basic Pacemaker Cluster Installation on two RedHat Enterprise Linux 8.10 VMs (VirtualBox) with sbd fencing.
Preparation
We will install a 2-node cluster, so we need two running VMs. The basic VM installation is described here. The hostnames in this example are lin1 and lin2. For the storage-based death (SBD) fencing we need a raw disk that is accessible from both nodes. With both nodes running, we set up this shared disk and the cluster interconnect as follows:
rem run this on the Windows host of the VMs
set "NODE1="
set "NODE2="
(set /p NODE1=Enter VM name of cluster node 1 ^(e.g. lin1^):
set /p NODE2=Enter VM name of cluster node 2 ^(e.g. lin2^):
rem create and attach a 50MB raw disk for use as a stonith device
call vboxmanage createmedium disk --filename c:\vms\%NODE1%\%NODE1%_stonith.vdi --sizebyte=52428800 --variant Fixed
call vboxmanage storageattach %NODE1% --storagectl "SATA" --port 1 --device 0 --type hdd --medium c:\vms\%NODE1%\%NODE1%_stonith.vdi --mtype=shareable
call vboxmanage storageattach %NODE2% --storagectl "SATA" --port 1 --device 0 --type hdd --medium c:\vms\%NODE1%\%NODE1%_stonith.vdi --mtype=shareable
rem add interconnect network adapters
call vboxmanage controlvm %NODE1% shutdown
call vboxmanage controlvm %NODE2% shutdown
timeout /T 10 /NOBREAK
call vboxmanage modifyvm %NODE1% --nic2=intnet --nic-type2=82540EM --cable-connected2=on --intnet2=interconnect
call vboxmanage modifyvm %NODE2% --nic2=intnet --nic-type2=82540EM --cable-connected2=on --intnet2=interconnect
call vboxmanage startvm %NODE1%
call vboxmanage startvm %NODE2%)
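Once both VMs are back up, it is worth checking inside the guests that the shared disk and the interconnect adapter are actually visible. A minimal check, assuming the new disk appears as /dev/sdb and the second adapter as enp0s8 (device names may differ on your VMs):
# run this on both nodes
lsblk /dev/sdb        # should show a 50M disk without partitions
ip link show enp0s8   # the interconnect adapter should be listed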
Installation
Run these commands as root on node1 (lin1):
cat << EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
BOOTPROTO=none
ONBOOT=yes
PREFIX=16
IPADDR=192.168.0.1
EOF
systemctl restart NetworkManager
hostname | awk -F. '{print $1}' > /etc/hostname ; hostname -F /etc/hostname
subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
dnf -y install pcs pacemaker fence-agents-sbd
echo 'hacluster:changeme'|chpasswd
systemctl start pcsd.service
systemctl enable pcsd.service
and these commands as root on node2 (lin2):
cat << EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
BOOTPROTO=none
ONBOOT=yes
PREFIX=16
IPADDR=192.168.0.2
EOF
systemctl restart NetworkManager
hostname | awk -F. '{print $1}' > /etc/hostname ; hostname -F /etc/hostname
subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
dnf -y install pcs pacemaker fence-agents-sbd
echo 'hacluster:changeme'|chpasswd
systemctl start pcsd.service
systemctl enable pcsd.service
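If firewalld is active on the VMs (not covered in this post), the cluster ports must also be opened on both nodes before the setup, for example with the predefined high-availability service:
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload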
Next we perform the basic cluster setup on one node:
echo changeme|pcs host auth lin1 lin2 -u hacluster
pcs cluster setup clu --start lin1 lin2
pcs cluster enable --all
# get the cluster status
sleep 20
pcs cluster status
# stop the cluster on all nodes
pcs cluster stop --all
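Note that pcs resolves the node names lin1 and lin2 via DNS or /etc/hosts, so the corosync ring may end up on the first network adapter rather than on the dedicated interconnect. If you want to pin the ring to the interconnect addresses configured above, the setup command accepts an addr= option per node. This is a variant of the setup shown above, not what was run in this example:
pcs cluster setup clu --start lin1 addr=192.168.0.1 lin2 addr=192.168.0.2
# verify which address corosync uses for the ring
corosync-cfgtool -s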
The next step is to configure SBD (storage-based death) fencing. Run these commands as root on both nodes of the cluster:
# load softdog (software watchdog); this creates /dev/watchdog, which sbd needs
# since we are in a VM there is no hardware watchdog, so the software watchdog is our best choice
cat << EOF > /etc/sysconfig/modules/softdog.modules
#!/bin/sh
[ -e /dev/watchdog ] || modprobe softdog
EOF
chmod 755 /etc/sysconfig/modules/softdog.modules
modprobe softdog
# configure sbd
systemctl enable sbd
sed -i 's|^#SBD_DEVICE=""|SBD_DEVICE="/dev/sdb"|' /etc/sysconfig/sbd
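Before initializing the sbd device it is a good idea to verify on both nodes that the watchdog device exists and that the sbd configuration picked up the shared disk (assuming it is /dev/sdb as above):
lsmod | grep softdog                  # the softdog module is loaded
ls -l /dev/watchdog                   # the watchdog device was created
grep ^SBD_DEVICE /etc/sysconfig/sbd   # should show SBD_DEVICE="/dev/sdb"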
The final step of the Pacemaker cluster installation is to run these commands as root on one node only:
# initialize the sbd device
sbd -d /dev/sdb create
# list the node slots and messages on the device
sbd -d /dev/sdb list
# dump the sbd metadata header
sbd -d /dev/sdb dump
pcs cluster start --all
# send test message to the other node (lin2)
sbd -d /dev/sdb message lin2.fritz.box test
# on the other node (lin2) the test message will be shown in the logfile /var/log/messages
# on lin2: tail /var/log/messages
# create fencing config
pcs stonith create sbd_fencing fence_sbd devices=/dev/sdb
# display stonith fencing config
sleep 25 ; pcs stonith
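To verify that fencing really works, you can fence one node manually from the other. Be aware that this resets the fenced node, so only do it on a test cluster; a minimal sketch:
# run on lin1: fence lin2 via the configured stonith resource
pcs stonith fence lin2
# lin2 gets reset; once it is back up, check that it rejoined the cluster
pcs status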
This concludes the Pacemaker cluster installation. The cluster is now operational.
Useful commands
This is a selection of useful commands for the administration of a Pacemaker cluster.
# get the status of the cluster
pcs status
pcs status --full
watch -n 1 pcs status --full # refresh every second
crm_mon -A
# remove resource constraints
pcs resource clear postgresql-clone
# put a node in standby mode
pcs node standby lin2
# remove a node from standby mode
pcs node unstandby lin2
# stonith / sbd status
pcs stonith sbd status --full
# check the cluster config
crm_verify -L
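If crm_verify reports warnings, as in the sample output further below, more context can be obtained by raising the verbosity or by dumping the complete cluster configuration:
crm_verify -L -V   # more verbose check of the cluster configuration
pcs config         # show the complete cluster and resource configuration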
Output of some commands:
[root@lin1 ~]# pcs status --full
Cluster name: clu
Cluster Summary:
* Stack: corosync (Pacemaker is running)
* Current DC: lin2 (2) (version 2.1.7-5.2.el8_10-0f7f88312) - partition with quorum
* Last updated: Fri Feb 7 14:23:33 2025 on lin1
* Last change: Fri Feb 7 12:33:37 2025 by hacluster via hacluster on lin1
* 2 nodes configured
* 1 resource instance configured
Node List:
* Node lin1 (1): online, feature set 3.19.0
* Node lin2 (2): online, feature set 3.19.0
Full List of Resources:
* sbd_fencing (stonith:fence_sbd): Started lin1
Migration Summary:
Tickets:
PCSD Status:
lin1: Online
lin2: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
sbd: active/enabled
[root@lin1 ~]# pcs stonith sbd status --full
SBD STATUS
<node name>: <installed> | <enabled> | <running>
lin1: YES | YES | YES
lin2: YES | YES | YES
Messages list on device '/dev/sdb':
0 lin2.fritz.box clear
1 lin1.fritz.box clear
SBD header on device '/dev/sdb':
==Dumping header on disk /dev/sdb
Header version : 2.1
UUID : d2d4cb48-dcfe-4f2a-b607-3bb375c21926
Number of slots : 255
Sector size : 512
Timeout (watchdog) : 5
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 10
==Header on disk /dev/sdb is dumped
[root@lin1 ~]# crm_verify -L
Support for 'score' in rsc_order is deprecated and will be removed in a future release (use 'kind' instead)
crm_verify: Warnings found during check: config may not be valid
-V may provide more details
[root@lin1 ~]#
The pcsd Web UI can be used as a web interface to monitor and administer a Pacemaker cluster. After logging in with hacluster / changeme, click on Add Existing and enter one of the existing nodes. Take a look here to see how it looks.
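pcsd listens on TCP port 2224, so in this example the Web UI is reachable at https://lin1:2224 (with a self-signed certificate). A quick check that pcsd is listening:
ss -tlnp | grep 2224   # pcsd should be listening on port 2224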
Further info
- Creating a Red Hat High-Availability cluster with Pacemaker
- HA projects clusterlabs.org (e.g. Pacemaker, Corosync, fence-agents, resource-agents)
If you want to remove the SBD fencing disk, you can do that with the script below (see the note after it about removing the fencing configuration from the cluster first):
rem run this on the Windows host of the VMs
set "NODE1="
set "NODE2="
(set /p NODE1=Enter VM name of cluster node 1 ^(e.g. lin1^):
set /p NODE2=Enter VM name of cluster node 2 ^(e.g. lin2^):
rem detach medium
call vboxmanage storageattach %NODE1% --storagectl "SATA" --port 1 --device 0 --type hdd --medium none
call vboxmanage storageattach %NODE2% --storagectl "SATA" --port 1 --device 0 --type hdd --medium none
call vboxmanage closemedium disk c:\vms\%NODE1%\%NODE1%_stonith.vdi --delete)
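Before detaching the shared disk, the fencing configuration should also be removed on the cluster side. A minimal sketch, assuming the resource name sbd_fencing from above:
# run as root on one node
pcs stonith delete sbd_fencing
pcs cluster stop --all
# run as root on both nodes
systemctl disable sbd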