In many OS clustering technologies, shared storage is required so that data remains readable across the clustered nodes. MySQL HA with DRBD is a leading solution that offers:
- An end-to-end, integrated stack of mature and proven open source technologies, fully supported by Oracle;
- Automatic failover and recovery for service continuity;
- Mirroring, via synchronous replication, to ensure failover between nodes without the risk of losing committed transactions;
- Building of HA clusters from commodity hardware, without the requirement for shared-storage.
To ensure continuity of the database service, DRBD alone is not enough: it only provides storage replication at the block-device level. The clustering layer must also manage the virtual IP, service start/stop, and the filesystem mount. This blog is written with reference to the DRBD deployment guide on -
http://www.mysql.com/why-mysql/white-papers/mysql-high-availability-drbd-configuration-deployment-guide/
The major difference is the use of the "pcs" command instead of "crm"; this blog only provides the details of using "pcs".
Background information on the crm shell is available in the crm shell FAQ -
http://www.nongnu.org/crmsh/faq.html
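For example, a cluster property that the deployment guide sets with the crm shell maps to pcs as follows (both commands make the same change to the cluster configuration):
crm configure property no-quorum-policy=ignore   # crm shell syntax used in the guide
pcs property set no-quorum-policy=ignore         # equivalent pcs syntax used in this blog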
Once DRBD is configured correctly, the cluster resources can be created with "pcs". The full setup steps are as follows.
a. Step1: Check that the correct kernel is installed
yum update
b. Step2: Ensure DRBD tools are installed
yum install drbd84-utils
(Here drbd84-utils is installed instead of drbd83-utils. Check which version of the drbd kernel module is loaded and install the matching version of the DRBD utilities.)
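To confirm the loaded module version before choosing the utilities package:
# version of the loaded drbd kernel module (works once the module is loaded)
cat /proc/drbd
# or query the module on disk
modinfo drbd | grep -i "^version"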
c. Step3: Ensure the cluster stack (Corosync/Pacemaker) is installed and running
yum install corosync
yum install pacemaker
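Installing the packages does not start the cluster stack. On a CentOS/RHEL 6 style system, for example, the daemons can be started and enabled as follows (a sketch; adjust to your init system, and note corosync must be configured, e.g. /etc/corosync/corosync.conf, before starting):
service corosync start
service pacemaker start
chkconfig corosync on
chkconfig pacemaker on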
d. Step4: Configure DRBD and create the file system on the 2 hosts (please refer to the MySQL DRBD HA document)
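For reference, a minimal DRBD resource definition matching the names used later in this blog (clusterdb_res, /dev/drbd0) could look like the sketch below; the backing disk (/dev/sdb1) and the replication addresses are placeholders for your environment:
# /etc/drbd.d/clusterdb_res.res
resource clusterdb_res {
  protocol C;
  on virtual-101 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.56.101:7789;
    meta-disk internal;
  }
  on virtual-102 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 192.168.56.102:7789;
    meta-disk internal;
  }
}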
e. Step5: Install and Configure MySQL Server (using RPM)
Once MySQL Server is installed and the MySQL DB is configured, create the directory layout:
> mkdir /mysql
> mkdir /mysql/drbd (this is the folder where the DRBD device will be mounted)
> mkdir /mysql/var
> chown -R mysql:mysql /mysql
Create the my.cnf under /mysql/drbd
[mysqld]
datadir = /mysql/drbd/data
port = 3306
server_id = 101
socket = /mysql/var/mysql01.sock
pid-file = /mysql/var/mysql01.pid
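With my.cnf in place, the data directory under the DRBD mount still needs to be initialized once, on the node that is currently DRBD primary (a sketch using the pre-5.7 mysql_install_db tool shipped with the MySQL RPMs):
# run on the DRBD primary only, with /mysql/drbd mounted
mysql_install_db --datadir=/mysql/drbd/data --user=mysql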
The cluster resources can be configured using "pcs".
Note: your OS may not ship the crm shell; pcs can be installed via the yum repository.
yum install pcs
Create the shell script /root/mypcs.sh (and make it executable with chmod +x mypcs.sh).
The configuration values below should be changed according to your environment (VIP address, NIC, DRBD resource and device, file paths). The VIP is configured on eth0:1 using 192.168.56.51.
#!/bin/bash
# Setup Cluster - setup_cluster.sh
set -x
# Build the configuration in an offline copy of the CIB; push it in one step at the end
pcs cluster cib mysql_cfg
# Cluster properties: 2-node cluster (no quorum), no fencing configured in this example
pcs -f mysql_cfg property set no-quorum-policy="ignore"
pcs -f mysql_cfg property set stonith-enabled="false"
# Resource defaults: stay on the current node, move after 2 failures, expire failures after 60s
pcs -f mysql_cfg resource defaults resource-stickiness="INFINITY"
pcs -f mysql_cfg resource defaults migration-threshold="2"
pcs -f mysql_cfg resource defaults failure-timeout=60s
# Virtual IP that MySQL clients connect to
pcs -f mysql_cfg resource create p_ip_mysql IPaddr2 \
    ip="192.168.56.51" \
    nic="eth0:1" \
    cidr_netmask="24"
# DRBD resource and its master/slave set: one master, one slave across the 2 nodes
pcs -f mysql_cfg resource create p_drbd_mysql drbd \
    drbd_resource="clusterdb_res" op monitor interval="15s"
pcs -f mysql_cfg resource master msMySQL p_drbd_mysql \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
# Filesystem on top of the DRBD device, mounted where my.cnf and the datadir live
pcs -f mysql_cfg resource create p_fs_mysql Filesystem \
    device="/dev/drbd0" directory="/mysql/drbd" fstype="ext4"
# The MySQL server itself
pcs -f mysql_cfg resource create p_mysql mysql \
    binary="/usr/sbin/mysqld" config="/mysql/drbd/my.cnf" \
    datadir="/mysql/drbd/data" pid="/mysql/var/mysql.pid" \
    socket="/mysql/var/mysql01.sock" user="mysql" group="mysql" \
    additional_parameters="--user=mysql" \
    op start timeout=120s \
    op stop timeout=120s \
    op monitor interval=20s timeout=30s
# Group the VIP, filesystem and MySQL so they start/stop together and in order
pcs -f mysql_cfg resource group add g_mysql p_ip_mysql p_fs_mysql p_mysql
# The group must run on the DRBD master, and only after promotion / before demotion
pcs -f mysql_cfg constraint colocation add g_mysql with Master msMySQL
pcs -f mysql_cfg constraint order promote msMySQL then start g_mysql symmetrical=false score=INFINITY
pcs -f mysql_cfg constraint order demote msMySQL then stop g_mysql symmetrical=false score=0
# Push the prepared configuration to the live cluster
pcs cluster cib-push mysql_cfg
# rm -f mysql_cfg
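After running the script on one of the nodes, the result can also be verified with pcs itself:
# apply the prepared configuration
./mypcs.sh
# show the status of nodes and resources
pcs status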
Once the service is running, execute the following as root on the master host:
> crm_mon -Arn
To initiate a migration test, kill the mysqld process on the master host (assuming only one mysqld is running):
> killall -9 mysqld
The failed resource is detected and, after a short while, re-spawned on the same node. Because of migration-threshold="2", killing the process a second time migrates the service to the other node: p_mysql, which was running on virtual-101, switches to "virtual-102" together with the rest of the group (the filesystem and the virtual IP).
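The failure counter that drives this behaviour can be inspected and reset with pcs (using the resource name from the script above):
# show how many times p_mysql has failed on each node
pcs resource failcount show p_mysql
# clear the failure history so the resource can be scheduled normally again
pcs resource cleanup p_mysql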