Required packages
yum -y install mdadm parted
Software RAID SSDs
For each disk in the array, create a partition covering 80% of the disk (starting at sector 2048) to overprovision and align the drive:
Usage:
parted $DRIVE mklabel gpt
parted $DRIVE mkpart $LABEL 2048s "80%"
Example:
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart DRBD-MDADM 2048s "80%"
Now assemble the RAID array
Usage:
mdadm --create $MD_ARRAY --bitmap=internal --metadata=1.2 --level $RAID_LEVEL --raid-disks $N $PART1 $PART2 .. $PARTN
Example:
mdadm --create /dev/md3 --bitmap=internal --metadata=1.2 --level 10 --raid-disks 4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
Make the array persistent (double-check the file after running the command to make sure it is sane).
mdadm --detail --scan >> /etc/mdadm.conf
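Optional sanity check before moving on (device name taken from the example above; output will vary):
cat /proc/mdstat
mdadm --detail /dev/md3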
Hardware RAID
Card settings TBD.
Create a partition of appropriate size.
Single disks
Partition the disk to avoid start-of-disk alignment issues
Usage:
parted $DRIVE mklabel gpt
parted $DRIVE mkpart $LABEL 2048s -- -1
Example:
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart DRBD 2048s -- -1
This is for all standalone interfaces. Go to the next section ("Network Teams") for any redundant connection needs.
Example ifcfg file
Usage:
DEVICE=$ETH_DEV
ONBOOT=yes
BOOTPROTO=static
IPADDR=$IP_ADDR
PREFIX=$CIDR_NETMASK
Example:
DEVICE=eth4
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.1
PREFIX=24
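Assuming the file is saved as /etc/sysconfig/network-scripts/ifcfg-eth4 (standard CentOS 7 network-scripts layout), bring the interface up and confirm the address:
ifup eth4
ip addr show eth4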
Required packages
yum -y install teamd
Runner configs
LACP
TEAM_CONFIG='{"runner": {"name": "lacp", "tx_hash": "ipv4"}, "link_watch": {"name": "ethtool"}}'
Round-robin
TEAM_CONFIG='{"runner": {"name": "roundrobin"}}'
Active-backup
TEAM_CONFIG='{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
Example ifcfg file for team members
DEVICE=eth2
ONBOOT=yes
HOTPLUG=no
TEAM_MASTER=drbd_team
Example /etc/sysconfig/network-scripts/ifcfg-drbd_team
DEVICE=drbd_team
DEVICETYPE=Team
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.1
PREFIX=24
TEAM_CONFIG='{"runner": {"name": "roundrobin"}}'
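Once the member and team ifcfg files are in place, a quick check of the team (interface name from the example above):
ifup drbd_team
teamdctl drbd_team state
ip addr show drbd_team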
The goal is to restrict public SSH access to office IP ranges
Create lwoffice and lwmonitoring zones
firewall-cmd --permanent --new-zone=lwoffice
firewall-cmd --permanent --new-zone=lwmonitoring
firewall-cmd --reload
Add SSH services to each zone
firewall-cmd --zone=lwoffice --add-service=ssh
firewall-cmd --zone=lwmonitoring --add-service=ssh
firewall-cmd --permanent --zone=lwoffice --add-service=ssh
firewall-cmd --permanent --zone=lwmonitoring --add-service=ssh
Add the appropriate IP ranges to each zone
firewall-cmd --permanent --zone=lwoffice --add-source=10.10.4.0/23
firewall-cmd --permanent --zone=lwoffice --add-source=10.20.4.0/22
firewall-cmd --permanent --zone=lwoffice --add-source=10.20.7.0/24
firewall-cmd --permanent --zone=lwoffice --add-source=10.30.4.0/22
firewall-cmd --permanent --zone=lwoffice --add-source=10.30.2.0/24
firewall-cmd --permanent --zone=lwoffice --add-source=10.30.104.0/24
firewall-cmd --permanent --zone=lwoffice --add-source=10.50.9.0/27
firewall-cmd --permanent --zone=lwmonitoring --add-source=10.10.9.0/24
firewall-cmd --permanent --zone=lwmonitoring --add-source=10.20.9.0/24
firewall-cmd --permanent --zone=lwmonitoring --add-source=10.30.9.0/24
firewall-cmd --permanent --zone=lwmonitoring --add-source=10.40.11.0/28
firewall-cmd --permanent --zone=lwmonitoring --add-source=10.50.9.0/27
Reload the permanent rules to make them active
firewall-cmd --reload
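Optional verification that the zones carry the expected sources and services:
firewall-cmd --get-active-zones
firewall-cmd --zone=lwoffice --list-all
firewall-cmd --zone=lwmonitoring --list-all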
Required packages and scripts
yum -y install pacemaker pcs
mkdir -p /usr/lib/ocf/resource.d/lw
wget http://scripts.ent.liquidweb.com/pacemaker/LVM -O /usr/lib/ocf/resource.d/lw/LVM
chmod 755 /usr/lib/ocf/resource.d/lw/LVM
Create a cluster zone for firewalld
firewall-cmd --permanent --new-zone=cluster
firewall-cmd --reload
Add services to the cluster zone
firewall-cmd --zone=cluster --add-service=ssh
firewall-cmd --zone=cluster --add-service=high-availability
firewall-cmd --zone=cluster --add-port=7788/tcp
firewall-cmd --permanent --zone=cluster --add-service=ssh
firewall-cmd --permanent --zone=cluster --add-service=high-availability
firewall-cmd --permanent --zone=cluster --add-port=7788/tcp
Add node IPs as sources to the cluster zone
firewall-cmd --zone=cluster --add-source=192.168.10.1
firewall-cmd --zone=cluster --add-source=192.168.10.2
firewall-cmd --permanent --zone=cluster --add-source=192.168.10.1
firewall-cmd --permanent --zone=cluster --add-source=192.168.10.2
Create SSH keys and allow them between hosts
ssh-keygen -t ecdsa -N '' -C 'cluster key' -f /root/.ssh/id_ecdsa
cat /root/.ssh/id_ecdsa.pub | ssh root@${node} "cat >> /root/.ssh/authorized_keys"
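A quick way to confirm key-based login works in both directions (cluster IPs from the examples in this document; adjust as needed):
for node in 192.168.10.1 192.168.10.2; do ssh root@${node} hostname; done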
Add entries to /etc/hosts. Use the shortname for each host (e.g. db01.domain.com => db01)
Warning: You must have all nodes in the host file for each server
192.168.10.1 db01
192.168.10.2 db02
Set the hacluster password, and note it for the next step.
passwd hacluster
Start and enable the pcsd service to allow for cluster creation
systemctl enable pcsd
systemctl start pcsd
Authorize all nodes in the cluster. This will prompt for the hacluster user's password on each node.
Usage:
pcs cluster auth $node1 $node2 .. $nodeN
Example:
pcs cluster auth db01 db02
Define cluster membership and cluster name
Usage:
pcs cluster setup --name $cluster_name $node1 $node2 .. $nodeN --transport udpu
Example:
pcs cluster setup --name mycluster01 db01 db02 --transport udpu
Start and enable the cluster
pcs cluster start --all
pcs cluster enable --all
Set the resource stickiness of all cluster managed resources (cost of resource migration)
pcs property set default-resource-stickiness=100
Disable stonith (fencing)
pcs property set stonith-enabled=false
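Optional check that the cluster is up and the properties took effect:
pcs status
pcs property list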
Note on IPMI fencing
Nodes will need their IPMI interfaces attached to the Enterprise MES IPMI VLAN
Note: Required packages
yum -y install fence-agents-ipmilan ipmitool curl
Configure IPMI network settings
Usage:
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr $IPMI_ADDR
ipmitool lan set 1 netmask $IPMI_NETMASK
ipmitool lan set 1 defgw ipaddr $IPMI_GW
ipmitool lan set 1 arp respond on
Example:
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.39.208.10
ipmitool lan set 1 netmask 255.255.254.0
ipmitool lan set 1 defgw ipaddr 10.39.208.1
ipmitool lan set 1 arp respond on
Create an operator level IPMI user for fencing
Usage:
ipmitool user set name 4 {{ ipmi.user }}
ipmitool user set password 4 {{ ipmi.password }}
ipmitool channel setaccess 1 4 privilege=3
ipmitool channel setaccess 1 4 link=on ipmi=on callin=on
Example:
ipmitool user set name 4 pacemaker
ipmitool user set password 4 xaiThewah6ph
ipmitool channel setaccess 1 4 privilege=3
ipmitool channel setaccess 1 4 link=on ipmi=on callin=on
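Optional verification of the IPMI LAN settings and the new user (channel 1 and user ID 4 as above):
ipmitool lan print 1
ipmitool user list 1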
Warning: On each node's IPMI interface
Login to the web interface as the administrator.
Navigate to Configuration > IP Access Control, then check Enable IP access control to enable the IPMI firewall.
Add the following IP ranges as allowed IPs:
* 10.20.7.0/24
* 10.30.4.0/22
* 10.20.4.0/22
Also add the public IPv4 addresses of all nodes as allowed IPs
As the last rule, add 0.0.0.0/0 as a DROP.
You'll be adding all stonith devices from one node
Usage:
pcs stonith create ipmi_$hostname fence_ipmilan \
    ipaddr=$ipmi_addr \
    login=$ipmi_user \
    passwd=$ipmi_password \
    pcmk_host_list=$hostname \
    action=reboot privlvl=operator
Example:
pcs stonith create ipmi_db01 fence_ipmilan \
    ipaddr=10.39.208.10 \
    login=pacemaker \
    passwd=yu0Ieng0in1a \
    pcmk_host_list=db01 \
    action=reboot privlvl=operator
Add location constraints, as an unresponsive node may not be able to fence itself
Usage:
pcs constraint location ipmi_$hostname avoids $hostname
Example:
pcs constraint location ipmi_db01 avoids db01
Enable stonith for the cluster
pcs property set stonith-enabled=true
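Optional review of the fence devices and constraints now that stonith is enabled:
pcs stonith show
pcs constraint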
7.0
For legacy CentOS 7 deployments, ELRepo no longer provides DRBD, so you will need to manually install the packages from our internal servers:
Execute:
yum install http://files.ent.liquidweb.com/content/RPMs/drbd84-utils-9.12.2-1.el7.elrepo.x86_64.rpm http://files.ent.liquidweb.com/content/RPMs/kmod-drbd84-8.4.11-1.2.el7_8.elrepo.x86_64.rpm
Additional repos: ELRepo
Add the following to /etc/yum.repos.d/elrepo-bootstrap.repo
[ansible-bootstrap-elrepo]
name = Ansible Bootstrap for ElRepo
baseurl = http://elrepo.org/linux/elrepo/el7/x86_64/
enabled = 0
gpgcheck = 0
Install the actual repo-release package
yum -y --enablerepo=ansible-bootstrap-elrepo install elrepo-release
Warning: Required packages
yum -y install drbd84-utils kmod-drbd84
Disable the DRBD service
systemctl disable drbd
Place the following contents in /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    startup {
        wfc-timeout 0;
        outdated-wfc-timeout 60;
        degr-wfc-timeout 120;
    }
    disk {
        on-io-error detach;
        al-extents 6427;
        resync-rate 50M;
        c-plan-ahead 50;
        c-min-rate 25M;
        c-max-rate 200M;
        c-fill-target 1M;
    }
    net {
        protocol C;
        verify-alg sha1;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        max-buffers 8192;
    }
    handlers {
        split-brain "/usr/lib/drbd/notify-split-brain.sh devnull@sourcedns.com";
    }
}
Warning: Which block device to use?
Be sure to partition any bare drives (single disks/hardware RAID) and use the partition, due to LVM detection issues. For software RAID, use the bare RAID device (/dev/md0, etc.).
Add the following to /etc/drbd.d/shared.res:
Usage:
resource shared {
    device $DRBD_DEV;
    meta-disk internal;
    net {
        cram-hmac-alg sha1;
        shared-secret "$DRBD_PASSWD";
    }
    on $NODE0_FQDN {
        address ipv4 $NODE0_DRBD_IP:7788;
        disk $DRBD_BACKING_DEVICE;
    }
    on $NODE1_FQDN {
        address ipv4 $NODE1_DRBD_IP:7788;
        disk $DRBD_BACKING_DEVICE;
    }
}
Example:
resource shared {
    device /dev/drbd0;
    meta-disk internal;
    net {
        cram-hmac-alg sha1;
        shared-secret "otoo7eit1ooT";
    }
    on storage0.enteng.es {
        address ipv4 192.168.10.1:7788;
        disk /dev/sdb1;
    }
    on storage1.enteng.es {
        address ipv4 192.168.10.2:7788;
        disk /dev/sdb1;
    }
}
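With the resource file in place on both nodes, an optional syntax check before creating metadata:
drbdadm dump all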
Initialize the DRBD volumes
Usage:
drbdadm create-md $DRBD_RESOURCE
Example:
drbdadm create-md shared
Bring the resource up
Usage:
drbdadm up $DRBD_RESOURCE
Example:
drbdadm up shared
Declare the disks to be consistent
Usage:
drbdadm -- --clear-bitmap new-current-uuid ${DRBD_RESOURCE}/0
Example:
drbdadm -- --clear-bitmap new-current-uuid shared/0
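Both nodes should now report the resource Connected and UpToDate; a quick check with the 8.4 tooling:
cat /proc/drbd
drbd-overview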
Set the following device filters in /etc/lvm/lvm.conf
filter = [ "a|/dev/drbd[0-9]+|", "r|.*|" ]
Disable lvmetad in /etc/lvm/lvm.conf by adding/modifying the following setting:
use_lvmetad = 0
Stop and disable the lvmetad service
systemctl stop lvm2-lvmetad systemctl disable lvm2-lvmetad
Create a mount point for symlinks
Usage:
mkdir -p /symlinks/$DRBD_NAME
Example:
mkdir -p /symlinks/shared
Make the DRBD resources primary
Usage:
drbdadm primary $DRBD_RESOURCE
Example:
drbdadm primary shared
Create the volume group
Usage:
vgcreate $VG_NAME $DRBD_DEVICE
Example:
vgcreate vg_shared /dev/drbd0
Create a small logical volume to hold configuration files
Usage:
lvcreate -L 1G -n symlinks $VG_NAME
Example:
lvcreate -L 1G -n symlinks vg_shared
Format the logical volume
Usage:
mkfs.ext4 /dev/$VG_NAME/$LV_NAME
Example:
mkfs.ext4 /dev/vg_shared/symlinks
Note: This is typically for SSDs only
Create an LVM thinpool
Usage:
lvcreate -l +95%FREE --poolmetadatasize 1G --type thin-pool --thinpool pool00 $VG_NAME
Example:
lvcreate -l +95%FREE --poolmetadatasize 1G --type thin-pool --thinpool pool00 vg_shared
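Optional check that the thinpool and symlinks LV exist with the expected sizes (VG name from the example):
lvs vg_shared
vgs vg_shared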
Here we're defining our basic storage resources
Push the current cluster configuration to a file for modification. This is done so that proper constraints are in place before starting services
pcs cluster cib /tmp/mysql.cfg
Create the DRBD pacemaker resource
Usage:
pcs -f /tmp/mysql.cfg resource create $DRBD_RESOURCE_NAME ocf:linbit:drbd drbd_resource=$DRBD_RESOURCE op start interval=0 timeout=240 op stop interval=0 timeout=100 op monitor interval=30 role=Master op monitor interval=31 role=Slave
Example:
pcs -f /tmp/mysql.cfg resource create p_drbd_shared ocf:linbit:drbd drbd_resource=shared op start interval=0 timeout=240 op stop interval=0 timeout=100 op monitor interval=30 role=Master op monitor interval=31 role=Slave
Create DRBD master/slave clone pairing (to indicate to pacemaker that DRBD should run on both nodes in primary/secondary mode)
Usage:
pcs -f /tmp/mysql.cfg resource master $MS_DRBD_RESOURCE_NAME $DRBD_RESOURCE_NAME master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Example:
pcs -f /tmp/mysql.cfg resource master ms_drbd_shared p_drbd_shared master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
Create the volume group pacemaker resource
Usage:
pcs -f /tmp/mysql.cfg resource create $VG_RESOURCE_NAME ocf:lw:LVM volgrpname=$VG_NAME op start start-delay=1s
Example:
pcs -f /tmp/mysql.cfg resource create p_vg_shared ocf:lw:LVM volgrpname=vg_shared op start start-delay=1s
Create the /symlinks mount point resource
Usage:
pcs -f /tmp/mysql.cfg resource create p_fs_shared_symlinks ocf:heartbeat:Filesystem device=/dev/$VG_NAME/symlinks directory=/symlinks/shared fstype=ext4 op start start-delay=1s
Example:
pcs -f /tmp/mysql.cfg resource create p_fs_shared_symlinks ocf:heartbeat:Filesystem device=/dev/vg_shared/symlinks directory=/symlinks/shared fstype=ext4 op start start-delay=1s
Create a pacemaker resource group, which will serialize the start and stop of the resources
Usage:
pcs -f /tmp/mysql.cfg resource group add $GROUP_NAME $VG_RESOURCE_NAME p_fs_shared_symlinks
Example:
pcs -f /tmp/mysql.cfg resource group add g_shared_storage p_vg_shared p_fs_shared_symlinks
Force the DRBD resource to become primary before starting the storage group
Usage:
pcs -f /tmp/mysql.cfg constraint order promote $MS_DRBD_RESOURCE_NAME then start $GROUP_NAME
Example:
pcs -f /tmp/mysql.cfg constraint order promote ms_drbd_shared then start g_shared_storage
Force the storage group to only run on the DRBD primary
Usage:
pcs -f /tmp/mysql.cfg constraint colocation add $GROUP_NAME with master $MS_DRBD_RESOURCE_NAME
Example:
pcs -f /tmp/mysql.cfg constraint colocation add g_shared_storage with master ms_drbd_shared
Make the configuration live in the cluster
pcs cluster cib-push /tmp/mysql.cfg
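Optional verification that the storage group started on one node and the ordering/colocation constraints are active:
pcs status
pcs constraint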
Additional repos: EPEL and Percona
Install EPEL-release package
yum -y install epel-release
Add the following to /etc/yum.repos.d/percona-bootstrap.repo
[ansible-bootstrap-percona]
name = Ansible Bootstrap for Percona
baseurl = http://repo.percona.com/release/7/RPMS/noarch/
enabled = 0
gpgcheck = 0
Install the percona-release package from the bootstrap repo:
yum -y --enablerepo=ansible-bootstrap-percona install percona-release
Required packages
Be sure to install the right major release version (Percona-Server-client-55 vs Percona-Server-client-57)
yum -y install pwgen MySQL-python percona-toolkit percona-xtrabackup Percona-Server-client-56 Percona-Server-server-56
Create the firewalld services zone
firewall-cmd --permanent --new-zone=services
firewall-cmd --reload
Add the mysql service to the services zone
firewall-cmd --zone=services --add-service=mysql
firewall-cmd --permanent --zone=services --add-service=mysql
Add appropriate sources to the services zone. This may be individual IPs or IP ranges.
Usage:
firewall-cmd --permanent --zone=services --add-source=$mysql_range
firewall-cmd --permanent --zone=services --add-source=$specific_client_ip
Example:
firewall-cmd --permanent --zone=services --add-source=192.168.0.0/24
Reload the permanent rules to make them active
firewall-cmd --reload
Create symlink directories and empty files for mysql
mkdir -p /symlinks/shared/etc/my.cnf.d
mkdir -p /symlinks/shared/root
touch /symlinks/shared/root/.my.cnf
Add the following in /symlinks/shared/etc/my.cnf
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
!includedir /etc/my.cnf.d/
# See /etc/my.cnf.d/tuning.cnf for standard tuning options
Add the following in /symlinks/shared/etc/my.cnf.d/tuning.cnf, tuning as necessary
[mysqld]
max_connections = 250
thread-cache-size = 100
max_allowed_packet = 16M
query_cache_size = 0
query_cache_type = 0
tmp-table-size = 32M
max-heap-table-size = 32M
max-connect-errors = 1000000
sysdate-is-now = 1
innodb_log_file_size = 64M
innodb_buffer_pool_size = 5120M
default-storage-engine = InnoDB
innodb-file-per-table = 1
key_buffer_size = 32M
Add config file/credentials symlinks to the pacemaker storage group
pcs resource create p_symlink_etc_my_cnf_d ocf:heartbeat:symlink target=/symlinks/shared/etc/my.cnf.d link=/etc/my.cnf.d backup_suffix=.active --group g_shared_storage --after p_fs_shared_symlinks
pcs resource create p_symlink_etc_my_cnf ocf:heartbeat:symlink target=/symlinks/shared/etc/my.cnf link=/etc/my.cnf backup_suffix=.active --group g_shared_storage --after p_fs_shared_symlinks
pcs resource create p_symlink_root_my_cnf ocf:heartbeat:symlink target=/symlinks/shared/root/.my.cnf link=/root/.my.cnf backup_suffix=.active --group g_shared_storage --after p_fs_shared_symlinks
Stop mysql if it's running locally.
service mysql stop
Clear out the local mysql data from package installation
rm -rf /var/lib/mysql/*
On CentOS 7 or AlmaLinux, disable mysqld
systemctl disable mysqld
Create a datadir LV of about 95% size
# Size of thinpool
lvs
  LV       VG        Attr       LSize Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  dummy    vg_shared Vwi-a-tz-- 4.00m pool00        0.00
  pool00   vg_shared twi-aotz-- 5.59g               6.27   1.59
  symlinks vg_shared -wi-ao---- 1.00g
lvcreate -T vg_shared/pool00 -V 5500m -n datadir
Format the new LV
mkfs.ext4 /dev/vg_shared/datadir
Add the datadir mount to the cluster
pcs resource create p_fs_shared_datadir ocf:heartbeat:Filesystem device=/dev/vg_shared/datadir directory=/var/lib/mysql fstype=ext4 options='noatime' op start start-delay=1s --group g_shared_storage --before p_fs_shared_symlinks
Wait a moment for the cluster to mount the LV, then remove the lost+found directory
rm -rf /var/lib/mysql/lost+found
MySQL 5.6 and earlier only
Initialize the datadir on DRBD
mysql_install_db --user mysql
Start mysql
service mysql start
MySQL 5.7 and later
Get the temporary root password from /var/log/mysqld.log.
Login via mysql -p, then provide the temporary root password when prompted.
Run the following query to set a new root password:
set password for 'root'@'localhost' = PASSWORD('mypass');
MySQL 5.6 and earlier
Run mysql_secure_installation, removing anonymous users/test databases, and setting a root password
Edit /root/.my.cnf
[client]
user=root
password=$password
Stop mysql
service mysql stop
Add the mysql service to the cluster
pcs resource create p_mysqld systemd:mysqld --group g_shared_storage
This command changes slightly depending on which database is installed: for MySQL < 5.7 the unit is mysql, for MySQL >= 5.7 it is mysqld (as above), and for MariaDB it is mariadb.
Add any VIPs to the cluster
Usage:
pcs resource create p_vip_$VIP ocf:heartbeat:IPaddr2 ip=$VIP --group g_shared_storage --before p_mysqld
Example:
pcs resource create p_vip_192.168.0.100 ocf:heartbeat:IPaddr2 ip=192.168.0.100 --group g_shared_storage --before p_mysqld
pcs resource create p_vip_10.30.42.42 ocf:heartbeat:IPaddr2 ip=10.30.42.42 --group g_shared_storage --before p_mysqld
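Optional check that each VIP resource started and the address is bound on the active node (addresses from the example above):
pcs status resources
ip addr show | grep -E '192.168.0.100|10.30.42.42'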
Additional repos: EPEL
Install the EPEL release package
yum -y install epel-release
Required packages
yum -y install phpMyAdmin httpd php mod_ssl openssl
Create the firewalld services-web zone
firewall-cmd --permanent --new-zone=services-web
firewall-cmd --reload
Add http/https services to the services-web zone
firewall-cmd --zone=services-web --add-service=http
firewall-cmd --zone=services-web --add-service=https
firewall-cmd --permanent --zone=services-web --add-service=http
firewall-cmd --permanent --zone=services-web --add-service=https
Add the customer's VPN range to the services-web zone
Usage:
firewall-cmd --permanent --zone=services-web --add-source=$customer_vip
Example:
firewall-cmd --permanent --zone=services-web --add-source=172.20.138.0/24
Stop httpd
systemctl stop httpd
Disable httpd from starting on boot
systemctl disable httpd
Generate new self-signed SSL certificate
openssl req -x509 -nodes -subj '/CN=localhost/' -days 3650 -newkey rsa:4096 -sha256 -keyout /etc/pki/tls/private/localhost.key -out /etc/pki/tls/certs/localhost.crt
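Optional check of the generated certificate and its validity window:
openssl x509 -noout -subject -dates -in /etc/pki/tls/certs/localhost.crt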
Create a directory to symlink the phpMyAdmin.conf file from the DRBD device
Usage:
mkdir -p /symlinks/$DRBD_NAME/etc/httpd/conf.d
Example:
mkdir -p /symlinks/shared/etc/httpd/conf.d
Add the following to /symlinks/shared/etc/httpd/conf.d/phpMyAdmin.conf, adding the customer's VPN range
#
# Allows only localhost by default
#
# But allowing phpMyAdmin to anyone other than localhost should be considered
# dangerous unless properly secured by SSL
Alias /phpMyAdmin /usr/share/phpMyAdmin
Alias /phpmyadmin /usr/share/phpMyAdmin
<Directory /usr/share/phpMyAdmin/>
AddDefaultCharset UTF-8
RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule .* https://%{SERVER_NAME}/phpMyAdmin [R,L]
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAny>
Require host liquidweb.com
# Add customer VPN range here
Require ip 127.0.0.1
Require ip ::1
</RequireAny>
</IfModule>
</Directory>
<Directory /usr/share/phpMyAdmin/setup/>
<IfModule mod_authz_core.c>
# Apache 2.4
<RequireAny>
Require host liquidweb.com
Require ip 127.0.0.1
Require ip ::1
</RequireAny>
</IfModule>
</Directory>
# These directories do not require access over HTTP - taken from the original
# phpMyAdmin upstream tarball
#
<Directory /usr/share/phpMyAdmin/libraries/>
Order Deny,Allow
Deny from All
Allow from None
</Directory>
<Directory /usr/share/phpMyAdmin/setup/lib/>
Order Deny,Allow
Deny from All
Allow from None
</Directory>
<Directory /usr/share/phpMyAdmin/setup/frames/>
Order Deny,Allow
Deny from All
Allow from None
</Directory>
Add the phpMyAdmin.conf symlink resource
pcs resource create p_symlink_etc_httpd_conf.d_phpMyAdmin.conf ocf:heartbeat:symlink target=/symlinks/shared/etc/httpd/conf.d/phpMyAdmin.conf link=/etc/httpd/conf.d/phpMyAdmin.conf backup_suffix=.active --group g_shared_storage
Add the httpd resource to the storage group
Usage:
pcs resource create p_httpd systemd:httpd --group g_${DRBD_NAME}_storage
Example:
pcs resource create p_httpd systemd:httpd --group g_shared_storage
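Once the cluster has started httpd on the active node, a quick local check of the phpMyAdmin vhost (self-signed certificate, so -k skips verification):
pcs status resources
curl -k -I https://localhost/phpMyAdmin/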
Precheck
Warning: Check that the lp-UID file exists
Check that the /usr/local/lp/etc/lp-UID file exists and is valid for this server. This should match the UID of the subaccount in billing.
If this is not correct our internal scripts will fail to install or run correctly.
Install the MOTDSet script
[[ -e /usr/local/mes/app/bin/mesapp ]] || curl -s https://assets.ent.liquidweb.com/mesinstaller | perl - MESApp
Enable the MOTDSet feature:
mesapp feature install MOTDSet
Install the HAMySQL group:
Note: If everything above succeeded you should see an updated MOTD message printed to the screen.
Install the lwbash wrapper
Install the lwbash wrapper:
mkdir -p /opt/lwbash
wget https://scripts.ent.liquidweb.com/lwbash/lwbash.sh -O /opt/lwbash/lwbash.sh
chmod 700 /opt/lwbash/lwbash.sh
Add the hamysql parts:
/opt/lwbash/lwbash.sh --add hamysql
Install the MESAgent
You can install the MESAgent daemon if it is not already installed with this command:
[[ -e /usr/local/mes/bin/mesagent ]] || curl -s https://assets.ent.liquidweb.com/mesinstaller | perl - MESAgent
Enable the init script for the MESAgent:
ln -s /usr/local/mes/init_scripts/sysv/mesagent /etc/init.d/mesagent
chkconfig --add mesagent
chkconfig mesagent on
Start the MESAgent for the first time:
/etc/init.d/mesagent start
Enable the systemd service file for the MESAgent:
systemctl enable /usr/local/mes/init_scripts/systemd/mesagent.service
Start the MESAgent for the first time:
systemctl start mesagent
Once started, the MESAgent will register with the MESController and will start monitoring load and disk usage.
If you load the server in the controller, it should now have an Agent Control panel if the MESAgent is properly registered.
Enable MES Monitoring
Once the MESAgent is installed and working on all nodes in the cluster you will be able to enable MES monitoring.
To do this, load the MES subaccount in the controller, make sure all servers in this cluster are assigned as members, and click the Enable MES Monitoring button.
Failure to do this step will result in no notifications being generated when there is a failure.