Correcting misconfigured ESXi shared storage settings
IT shops that started small may find they need to update their ESXi shared storage settings to prevent significant issues when a host fails.
Misconfigured shared storage is not rare in small VMware environments. Taking the time to correct these settings can avoid a lot of pain if an ESXi host gets overloaded or fails.
There are three fundamental elements to a good vSphere deployment: multiple identical ESXi servers, shared storage and vCenter. Any data center with multiple ESXi servers should have shared storage that is consistently available across clusters of servers.
Shared storage is an enabler for some core vSphere technologies, specifically vMotion, Distributed Resource Scheduler (DRS) and High Availability (HA). vMotion allows a running VM to be moved from one ESXi host to another with no downtime, and DRS uses it to move VMs to the ESXi host where they are least likely to run short of CPU and RAM. If an ESXi server fails, vSphere HA restarts the VMs that were running on it on the other ESXi hosts in the cluster. DRS and HA only work for VMs that are kept on shared storage; the VM doesn't change its storage location with either HA or DRS migrations.
For HA and DRS to work optimally, all storage used by the VMs must be available to every ESXi server in the cluster. The configuration required for consistent shared storage depends on the storage network used: Fibre Channel (FC), iSCSI or Network File System (NFS). For FC and iSCSI storage you need to make sure the storage array is presenting the same logical unit numbers (LUNs) to every ESXi host. Some arrays make this easy with groups, and others make you set up each presentation to each host separately.
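One way to confirm that every host sees the same LUNs is to compare device lists from the ESXi shell. A minimal sketch, assuming SSH access to each host; the file path and `diff` step are illustrative, not a VMware-prescribed procedure:

```shell
# List the SCSI device identifiers this ESXi host can see.
# NAA identifiers are globally unique, so two hosts presented the same
# LUNs will produce the same sorted list.
esxcli storage core device list | grep -E '^naa\.' | sort > /tmp/luns-$(hostname).txt

# Copy the per-host lists to one machine and compare them, e.g.:
# diff luns-esxi01.txt luns-esxi02.txt
```

Any line that appears for one host but not another is a LUN that is not being presented consistently by the array or the fabric.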
For FC, you need to set up the FC switches with consistent zoning so all your hosts see the array. If you are using iSCSI, then every ESXi host needs the same discovery setup: the same list of dynamic discovery IP addresses.
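The iSCSI dynamic discovery list can be set from the ESXi shell so that the identical command is run on every host. A sketch, where the adapter name `vmhba33` and the array address `192.0.2.10` are placeholders for your environment:

```shell
# Add the same dynamic discovery (Send Targets) address on every host.
# vmhba33 and 192.0.2.10:3260 are placeholders for your software iSCSI
# adapter and your array's iSCSI portal.
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.0.2.10:3260

# Verify the discovery list matches across hosts:
esxcli iscsi adapter discovery sendtarget list
```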
If you are using NFS, then most of the setup is done at the ESXi server. Make sure you use the same NFS server name and share path on every ESXi host. If one ESXi host has the IP address of the NFS server, another has the host name and a third uses the fully qualified domain name, vCenter won't treat them as the same data store; the server name and share path must be identical on every host. NFS and iSCSI setup for the ESXi hosts can be automated with a vSphere CLI or PowerCLI script, making it simple to ensure the same commands are used to build each host.
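Mounting the NFS data store from the command line makes it easy to guarantee the server name, share path and label are identical everywhere. A minimal sketch; `nfs01.example.com`, `/vol/ds01` and the label `nfs-ds01` are placeholder names for your NFS server, export and data store:

```shell
# Mount the NFS export with an identical server name, share path and
# volume label on every host -- run this exact command on each one.
esxcli storage nfs add \
    --host=nfs01.example.com \
    --share=/vol/ds01 \
    --volume-name=nfs-ds01
```

Because the command is identical on every host, vCenter sees one shared data store rather than several look-alikes.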
Checking your clusters for consistent storage is a simple matter with the Maps tab in the vSphere Client. Select your cluster -- not an individual ESXi host -- in the Hosts and Clusters view, click the Maps tab, turn off all relationships apart from Host to Datastore and click Apply Relationships. You should now see a mesh of hosts and data stores.
If every data store has a line to every host, then all is well. Data stores with lines to some but not all hosts can cause problems: only the connected hosts can run the VMs that use those data stores, and the unconnected hosts may have the wrong settings. Some data stores will have lines to only one ESXi host; these are usually the local disks inside that host, which cannot be shared. Be careful not to place VMs on local data stores unless the VM is only ever used on that ESXi host, such as a vShield agent VM.
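The same host-to-datastore check can be done without the Maps tab by listing mounted file systems on each host and comparing. A sketch from the ESXi shell:

```shell
# List the file systems mounted on this host. Shared VMFS and NFS data
# stores should appear with the same volume name on every host in the
# cluster; a volume unique to one host is likely its local disk.
esxcli storage filesystem list
```

Running this on each host and comparing the volume-name columns reveals the same gaps the map would show as missing lines.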
Turning on the VM to Datastore relationship will make the map messier, but it will also show which VMs are on shared data stores and which are using nonshared data stores. If you have VMs on storage that isn't shared, work out whether you can use Storage vMotion to relocate them to a shared data store; you will need enough free space on the destination. Use the Migrate option from the VM's menu, choose Change Datastore, select the destination data store and wait.