First published on TECHNET on Feb 19, 2016
Author: Dexuan Cui
When Linux is running on physical hardware, multiple computers may be configured in a Linux operating system cluster to provide high availability and load balancing in case of a hardware failure. Different clustering packages are available for different Linux distros, but for Red Hat Enterprise Linux (RHEL) and CentOS, Red Hat Cluster Suite is a popular choice to achieve these goals. A cluster consists of two or more nodes, where each node is an instance of RHEL or CentOS. Such a cluster usually requires some kind of shared storage, such as iSCSI or Fibre Channel, that is accessible from all of the nodes.
What happens when Linux is running in a virtual machine guest on a hypervisor, such as you might be using in your on-premises datacenter? It may still make sense to use a Linux OS cluster for high availability and load balancing. But how can you create shared storage in such an environment so that it is accessible to all of the Linux guests that will participate in the cluster? This series of blog posts answers these questions.
This series of blog posts walks through setting up Microsoft’s Hyper-V to create shared storage that can be used by Linux clustering software. Then it walks through setting up Red Hat Cluster Suite in that environment to create a five-node Linux OS cluster. Finally, it demonstrates an example application running in the cluster environment, and how a failover works.
The shared storage is created using Hyper-V’s Shared VHDX feature, which allows VM users to create a VHDX file and share it among the guest cluster nodes as if it were a shared Serial Attached SCSI disk. When the Shared VHDX feature is used, the .vhdx file itself must still reside in a location accessible to all the nodes of the cluster: either a CSV (Cluster Shared Volume) partition or an SMB 3.0 file share. For the example in this blog post series, we’ll use a host CSV partition, which requires a host cluster with an iSCSI target (server).
Note: To understand how clustering works, we first need to understand three important concepts in clustering: split-brain, quorum, and fencing.
- “Split-brain” is the situation where communication failures split a cluster into subclusters, each of which may believe it is the surviving cluster
- “Quorum” is the mechanism for determining which subcluster holds a majority of votes and may therefore fence the others and proceed to recover the cluster services
- “Fencing” is the mechanism for forcibly isolating a failed or unreachable node so that the remaining nodes can safely proceed
These three concepts will be referenced in the remainder of this blog post series.
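To make quorum concrete, here is a small sketch of the vote arithmetic RHCS uses by default: each node contributes one vote, and a subcluster may proceed only if it holds a strict majority of the expected votes. The node count below matches the five-node cluster built in this series.

```shell
# Default quorum arithmetic sketch: one vote per node, majority wins.
NODES=5
EXPECTED_VOTES=$NODES
QUORUM=$(( EXPECTED_VOTES / 2 + 1 ))   # smallest strict majority
SURVIVABLE=$(( NODES - QUORUM ))       # node losses the cluster can tolerate
echo "expected_votes=$EXPECTED_VOTES quorum=$QUORUM tolerates=$SURVIVABLE failures"
```

With five nodes, quorum is 3, so the cluster keeps running after losing up to two nodes; in a 2/3 split, only the three-node side is quorate, which is exactly how quorum decides which subcluster may fence the other.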
The walk-through will be in three blog posts:
- Set up a Hyper-V host cluster and prepare for shared VHDX. Then set up five CentOS 6.7 VMs in the host cluster that use the shared VHDX. These five CentOS VMs will form the Linux OS cluster.
- Set up a Linux OS cluster with the CentOS 6.7 VMs running RHCS and the GFS2 file system.
- Set up a web server on one of the CentOS 6.7 nodes, and demonstrate various failover cases. Then present a summary and conclusions.
Let’s get started!
Here we first set up an iSCSI target (server) on iscsi01, and then set up a two-node Hyper-V host cluster on hyperv01 and hyperv02. Both nodes of the Hyper-V host cluster run Windows Server 2012 R2 Hyper-V, with access to the iSCSI shared storage. The resulting configuration looks like this:
- Set up an iSCSI target on iscsi01. (Refer to Installing and Configuring target iSCSI server on Windows Server 2012.) We don’t need to buy real iSCSI hardware: Windows Server 2012 R2 can emulate an iSCSI target based on .vhdx files.
- On hyperv01 and hyperv02, use “iSCSI Initiator” to connect to the two LUNs on iscsi01. In “Disk Management” on both hosts, two new disks should then appear: one 200GB and one 1GB.
- On hyperv01 and hyperv02, install the “Failover Clustering” feature (which includes Failover Cluster Manager)
- On hyperv02, with Failover Cluster Manager -> “Create Cluster”, we create a host cluster with the two host nodes.
- Now, on both hosts, a special shared directory C:\ClusterStorage\Volume1 appears.
So we install “File and Storage Services” on iscsi01 using Server Manager -> Configure this local server -> Add roles and features -> Role-based or feature-based installation -> … -> Server Roles -> File and Storage Services -> iSCSI Target Server (add).
Then in Server Manager -> File and Storage Services -> iSCSI, use “New iSCSI Virtual Disk…” to create two .vhdx files: iscsi-1.vhdx (200GB) and iscsi-2.vhdx (1GB). Under “iSCSI TARGETS”, allow hyperv01 and hyperv02 as initiators (iSCSI clients).
On one host only, for example hyperv02, in “Disk Management”, create and format an NTFS partition on the 200GB disk (remember to choose “Do not assign a drive letter or drive path”).
Server Manager -> Configure this local server -> Add roles and features -> Role-based or feature-based installation -> … -> Features -> Failover Clustering.
Using “Storage -> Disks | Add Disk”, we add the two new disks: the 200GB disk is used as a Cluster Shared Volume and the 1GB disk as the Disk Witness in Quorum. To set the 1GB disk as the quorum disk, after “Storage -> Disks | Add Disk”, right-click the cluster node and choose More Actions -> Configure Cluster Quorum Settings… -> Next -> Select the quorum witness -> Configure a disk witness -> ….
- On hyperv02, with Failover Cluster Manager -> “Roles | Virtual Machines | New Virtual Machine”, we create five CentOS 6.7 VMs. For the purposes of this walk-through, the five VMs are named “my-vm1”, “my-vm2”, etc., and these are the names you’ll see used in the rest of the walk-through.
- Use static IP addresses and update /etc/hosts in all five VMs
- On hyperv02, in my-vm1’s “Settings | SCSI Controller”, add a 100GB hard drive using the “New Virtual Hard Disk Wizard”. Remember to store the .vhdx file in the shared host storage, e.g., C:\ClusterStorage\Volume1\100GB-shared-vhdx.vhdx, and remember to enable “Advanced Features | Enable virtual hard disk sharing”. Next, add the same .vhdx file to the other four VMs, with disk sharing enabled there too. In all five VMs, the disk will show up as /dev/sdb. Later, we’ll create a clustered file system (GFS2) on it.
- Similarly, we add another 1GB shared disk (C:\ClusterStorage\Volume1\quorum_disk.vhdx) with the Shared VHDX feature to all five VMs. This small disk will show up as /dev/sdc in the VMs, and later we’ll use it as a Quorum Disk in RHCS.
Make sure to choose “Store the virtual machine in a different location” and choose C:\ClusterStorage\Volume1. In other words, my-vm1’s configuration file and .vhdx file are stored in C:\ClusterStorage\Volume1\my-vm1\Virtual Machines and C:\ClusterStorage\Volume1\my-vm1\Virtual Hard Disks.
You can spread the five VMs across the two Hyper-V hosts however you like, as both hosts have equivalent access to C:\ClusterStorage\Volume1. The schematic diagram above shows three VMs on hyperv01 and two on hyperv02, but the specific layout does not affect the operation of the Linux OS cluster or the subsequent examples in this walk-through.
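The two shared disks added above will be initialized from inside the VMs in part 2 of this series. As a preview, here is a dry-run sketch of those commands. The cluster name “mycluster”, file-system name “gfs2vol”, and quorum-disk label “myqdisk” are placeholder names, not values from this walk-through, and the script only prints the commands, since both are destructive and should be run by hand on one node only:

```shell
#!/bin/sh
# Dry-run sketch: print the commands that will initialize the two shared disks.
# /dev/sdb = 100GB shared VHDX (GFS2), /dev/sdc = 1GB shared VHDX (quorum disk).
CLUSTER=mycluster   # placeholder cluster name
FSNAME=gfs2vol      # placeholder GFS2 file-system name
JOURNALS=5          # GFS2 needs one journal per node that mounts it

# lock_dlm is the clustered locking protocol; -t takes <clustername>:<fsname>.
GFS2_CMD="mkfs.gfs2 -p lock_dlm -t ${CLUSTER}:${FSNAME} -j ${JOURNALS} /dev/sdb"

# mkqdisk (from the cman package) labels the quorum disk for the qdiskd daemon.
QDISK_CMD="mkqdisk -c /dev/sdc -l myqdisk"

echo "$GFS2_CMD"
echo "$QDISK_CMD"
```

Note the journal count: GFS2 allocates one journal per mounting node, so a five-node cluster needs -j 5 (adding a sixth node later would require growing the journal count).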
Note: contact your network administrator to make sure the static IPs are reserved for this use.
So on my-vm1, /etc/sysconfig/network-scripts/ifcfg-eth0 is configured with the node’s static IP address.
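A minimal static configuration for eth0 might look like the following (the addresses shown are placeholders; substitute the static IPs reserved by your network administrator):

```
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.11
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
```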
And in /etc/hosts, we have:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
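In addition to the loopback entries above, each VM’s /etc/hosts carries one line per cluster node, so every node can resolve every other node by name. With placeholder addresses (matching the example static IP above), the entries would look like:

```
192.168.100.11 my-vm1
192.168.100.12 my-vm2
192.168.100.13 my-vm3
192.168.100.14 my-vm4
192.168.100.15 my-vm5
```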
This completes the first phase of setting up the Linux OS cluster. The Hyper-V hosts are running and configured, and we have five CentOS VMs running on those hosts. We have a Hyper-V Cluster Shared Volume (CSV), located on an iSCSI target, that contains the virtual hard disks for each of the five VMs.
The next blog post will describe how to actually set up the Linux OS cluster using Red Hat Cluster Suite.
~ Dexuan Cui