
Using shared disks to set up virtual clusters in ESXi

Thanks to expert advice, we've assembled some best practices for building a Red Hat-based Linux cluster in an ESXi environment.

One of the essential elements of vSphere virtualization is the ESXi cluster -- a group of physical ESXi hosts joined together to pool hardware resources. When paired with VMware vCenter, these clusters can help make the VMs you're providing resources for highly available. To further boost availability, users can create failover clusters to run on top of their ESXi cluster.

This article is not intended as a comprehensive tutorial on building a fully fledged cluster on top of a VMware ESXi cluster, but rather as a basic primer on best practices for setting up the VMware side of a Red Hat-based Linux cluster using shared disks. Unfortunately, VMware doesn't appear to have an up-to-date best practices document for these cluster settings.

Depending on whom I asked, there was initially some debate as to whether Red Hat clustering is supported in an ESXi environment at all. This article should not be considered a substitute for official VMware configuration documentation; any changes you make are at your own risk. Be sure to try this out on a test cluster before doing anything in production.

That said, after much experimentation, I have assembled the best advice from several discussions with knowledgeable professionals from Red Hat, VMware and others.

Formatting virtual cluster shared disks

The first thing to understand about virtual clustering is the optimal format of the virtual cluster's shared data disks. The recommended way of creating baseline virtual machines (VMs) is to use thick provision eager zeroed shared disks attached to the relevant controller at creation time. Assuming you are using two controllers, Small Computer System Interface (SCSI) controller 1:0 is your best option.
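If you prefer the command line to the GUI, you can create an eager-zeroed thick disk directly on the datastore with vmkfstools from the ESXi shell. The following is a minimal sketch; the datastore path, folder and 20 GB size are placeholder values, not recommendations:

# Create a 20 GB thick provision eager zeroed disk for shared cluster data.
vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/datastore1/cluster-shared/shared-data01.vmdk

You would then attach the resulting VMDK to SCSI 1:0 (and 1:1, and so on) on each cluster node as an existing disk.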

The SCSI controller that holds the shared data disks should have only the cluster-wide shared disks attached. You don't want to "mix and match" disk assignments. Every disk assignment should be identical across all the virtual cluster nodes.

In each virtual cluster node, the first SCSI controller (SCSI 0:0) should be used for the operating system, application file systems disks and swap space. In our example, the second controller (SCSI 1:0) is designated for the shared data disks for the cluster.
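In VMX terms, that layout looks roughly like the excerpt below. The file names and the LSI Logic controller type are illustrative assumptions rather than required values:

# Controller 0: OS, application file systems and swap (node-local disks).
scsi0.virtualDev = "lsilogic"
scsi0:0.fileName = "node1-os.vmdk"
# Controller 1: cluster-wide shared data disks only.
scsi1.virtualDev = "lsilogic"
scsi1:0.fileName = "shared-data01.vmdk"
scsi1:1.fileName = "shared-data02.vmdk"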

All the shared disks should be formatted as thick provision eager zeroed. If you neglect to do so, you'll run into a series of issues. If you have already created your cluster and disks, you can check the disk configuration using the vmkfstools command, as documented in VMware's Knowledge Base on the subject.
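As a rough sketch of that check, you can dump a disk's metadata from the ESXi shell and inspect the tbz (to-be-zeroed) counter, which should be zero for an eager-zeroed disk; the path below is again a placeholder:

# Dump disk metadata; a tbz value of 0 indicates the disk is fully
# zeroed out, i.e., thick provision eager zeroed.
vmkfstools -D /vmfs/volumes/datastore1/cluster-shared/shared-data01-flat.vmdk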

Designating authority for virtual cluster shared disks

The second part of setting up clusters on ESXi requires us to tell the VM host who is in charge of the virtual cluster's shared disks. If these settings aren't changed from the default, the Red Hat cluster and ESXi host will fight over file and disk locks. This can cause clustered shared disks to go into read-only mode. To configure locking correctly, you must perform a series of steps.

Begin by ensuring all virtual cluster nodes are powered down. Next, if you haven't already done so, create the shared disks, remembering to add and use a second SCSI controller as needed. Then add the existing shared disks to the remaining cluster nodes so that every virtual cluster node can see them. Again, remember to put the shared disks on the correct SCSI controller in the same order on each machine. For each virtual cluster participant, make sure the SCSI controller that holds the shared disks has SCSI bus sharing set to "None."

The next step is to set the multiwriter flag. One way of doing this is to use a secure shell client to log into a host running the VMs in question, open the VMX file of each virtual cluster node and add the entries listed below. Substitute the SCSI controller IDs and logical unit numbers with ones that apply to your configuration, and add as many multiwriter entries as you have shared disks and controllers. You can use "#" to add comment lines as needed.

# Enable disk UUID usage.
disk.EnableUUID = "true"

# Enable multi-writer flag for shared storage.
scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
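For completeness, the second controller itself should carry matching settings. The excerpt below sketches what those lines might look like; the LSI Logic controller type is an assumption, and bus sharing stays off because the multiwriter flag, not SCSI bus sharing, handles concurrent access:

# Second controller dedicated to the shared disks.
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "none"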

If you aren't interested in modifying VMX files by hand, you can add the same entries through the GUI's advanced VM settings editor. In the native client, this can be found under "General/Advanced Settings," where you can add the parameters. Access to the advanced settings is shown in Figure A.

Figure A. Adding entries with the GUI's advanced VM settings editor.

Enabling the Universally Unique Identifier (UUID) setting allows the guest OS to enumerate the disks by UUID and enables multipathing on the server. This is required for the clustering software to work correctly.
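As a quick sanity check from inside a Red Hat guest, you can confirm that the disks are now exposed with stable identifiers; with disk.EnableUUID set, each VMDK should appear under /dev/disk/by-id/ with a persistent scsi- identifier:

# List the persistent disk identifiers visible to the guest OS.
ls -l /dev/disk/by-id/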

At this point, you should be able to power up the servers without issue. Before powering on, it is advisable to ensure an anti-affinity rule exists to keep the virtual cluster nodes on separate hosts within the ESXi cluster; otherwise, a single host failure takes down multiple nodes and defeats the purpose of having a cluster.

Setting the VM's SCSI bus sharing to "None" in the vSphere Client absolves the VMware host of any disk-related locking or management and passes that overhead to the cluster. Without this, the VMware and Red Hat infrastructure would clash over locking, potentially causing issues such as the file system becoming read-only.

When added to the VMX file, these commands enable a number of discrete functions within cluster nodes. As its name suggests, the multiwriter flag in the VMX file allows access from multiple guests.

Don't forget to create a rule for VMs to run on separate hosts. Figure B shows an example of how to set this up.

Figure B. Creating a VM/Host Rule.

One thing worth noting is that while the methods shown above work fine, you should refrain from using storage-level replication or other nonstandard technologies underneath these shared disks, as they can create issues.

