When you design vSphere data stores, consider how your environment changes over time. Make sure you can accommodate changes to your VMs without a complete storage system redesign.
First, consider the requirements of your workloads. A VM's capacity and performance requirements depend on the amount of data it holds, so as the data volume in your VMs grows, you might also need greater storage performance.
Keep extra capacity available, monitor your data stores' free space and act quickly on low space alarms. If you thin-provision your VMDK files, ensure you use the UNMAP command so you can shrink those files to reclaim some space.
VM data is ever increasing
Some VMs grow slowly; others grow rapidly. Either way, once data nearly fills a VM's disk, you must allocate more capacity to that disk.
To assign that capacity, you must have space available on your data store. Luckily, you can expand the VMDK file while the VM is running.
Most guest OSes can rescan their disks, after which you can extend the partition to make the new capacity available. The same data store-VMDK-partition chain applies whenever you need more space: grow each layer in turn so the extra capacity reaches the guest.
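Because the VMDK grows inside the data store, checking free space before the expansion is the critical step. The sketch below models that check with plain numbers; the function name, headroom figure and return shape are illustrative assumptions, not any VMware API.

```python
def plan_vmdk_expansion(datastore_free_gb, vmdk_size_gb, grow_gb, headroom_gb=50):
    """Check whether a running VM's disk can grow without starving the data store.

    Illustrative model only: sizes are plain numbers, and the 50 GB headroom
    is an arbitrary safety margin, not a vSphere default.
    """
    if grow_gb > datastore_free_gb - headroom_gb:
        return None  # expanding would leave the data store dangerously full
    return {
        "new_vmdk_gb": vmdk_size_gb + grow_gb,
        "datastore_free_after_gb": datastore_free_gb - grow_gb,
    }

# Growing a 200 GB disk by 100 GB on a store with 500 GB free succeeds;
# the same growth on a store with only 120 GB free is refused.
print(plan_vmdk_expansion(500, 200, 100))
print(plan_vmdk_expansion(120, 200, 100))
```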
Thin-provisioning your VMs
You can simplify VM disk file size management by thin-provisioning your VMs. The guest OS inside a VM sees the allocated size of its disks, but when you thin-provision VMs, the VMDK file consumes data store space only for the blocks the guest has actually written.
Thick-provisioned VMDK files have the same allocated size as their size on the data store. For example, a 200 GB VM drive has a 200 GB thick-provisioned VMDK file. It uses the same 200 GB of data store space for its entire life.
A thin-provisioned disk file starts small and grows to accommodate new data. However, this continued growth means that you must consistently monitor data store free space and allocate more capacity before the data store fills up.
Thin-provisioned VMDK files are some of the few storage objects that can shrink. To shrink a VMDK file, you must use the SCSI UNMAP command and have vSphere 6.5 or later.
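The difference between the two provisioning models, and the effect of UNMAP, comes down to simple accounting. This sketch models it with plain numbers; the disk tuples and function names are hypothetical, not vSphere's own bookkeeping.

```python
def datastore_usage_gb(disks):
    """Sum data store consumption for a mix of thick and thin VMDKs.

    Each disk is (provisioned_gb, written_gb, is_thin). A thick disk consumes
    its full provisioned size for its entire life; a thin disk consumes only
    the blocks the guest has written. Illustrative model, not a VMware API.
    """
    return sum(written if is_thin else provisioned
               for provisioned, written, is_thin in disks)

def thin_vmdk_after_unmap(vmdk_gb, reclaimed_gb):
    """Estimate a thin VMDK's size after the guest deletes data and SCSI
    UNMAP (vSphere 6.5 or later) releases those blocks. Without UNMAP, the
    file stays at vmdk_gb even though the guest freed the space."""
    return max(vmdk_gb - reclaimed_gb, 0)

# Two 200 GB disks, each holding 40 GB of data: the thick one consumes
# 200 GB on the data store, the thin one only 40 GB.
print(datastore_usage_gb([(200, 40, False), (200, 40, True)]))
```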
VM snapshots take up storage
When you take a snapshot of a VM, the original disk file remains in the system. The system uses this original file to read unchanged data. Meanwhile, all new VM disk writes go to a new delta file that records every block the VM changes after the snapshot. This snapshot disk file grows like a thin-provisioned disk, only faster.
If a VM has multiple snapshots, each snapshot has its own VMDK file. Although only the latest snapshot file grows, prior snapshot disk files still take up space. Delete snapshots to free up space; when you delete one, vSphere consolidates its changed blocks into the parent disk, so the parent still contains the up-to-date blocks.
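The space accounting for a snapshot chain can be sketched as follows. This is a hypothetical model of the arithmetic described above, not vSphere's own consolidation logic.

```python
def snapshot_chain_usage_gb(base_gb, snapshot_delta_gbs):
    """Total data store space for a base disk plus its snapshot delta files.

    Only the newest delta still grows, but every older delta keeps its space
    until its snapshot is deleted and the changed blocks are consolidated
    into the parent. Illustrative accounting only.
    """
    return base_gb + sum(snapshot_delta_gbs)

# A 200 GB base disk with three snapshot deltas of 15, 8 and 30 GB
# consumes 253 GB of data store space until the snapshots are deleted.
print(snapshot_chain_usage_gb(200, [15, 8, 30]))
```

Deleting all three snapshots returns usage toward the base disk's own size, which is why clearing errant snapshots is the quickest way to recover data store space.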
Usually, production VMs have a single snapshot, which remains present only during backup. Non-production VMs might have more snapshots and, therefore, might require more data store space.
Accommodate growth in data stores
When you design vSphere data stores, you must accommodate changes in VM space requirements. Certain vSphere alarms warn you of space consumption, alerting you at 20% and 5% free space. A 20% warning tells you to pay attention to data store capacity: plan to grow the data store or delete errant snapshots. The 5% warning means you must take immediate action, as the data store has almost run out of space.
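The two thresholds map to a simple free-space calculation. The sketch below uses the article's 20% and 5% figures; in vCenter the actual alarm thresholds are configurable, and the function itself is an illustration, not the alarm engine.

```python
def datastore_alarm(capacity_gb, used_gb, warn_pct=20, alert_pct=5):
    """Classify a data store against the free-space thresholds described
    above: a warning at 20% free, a critical alert at 5% free.
    Illustrative only; vCenter alarm thresholds are configurable."""
    free_pct = 100 * (capacity_gb - used_gb) / capacity_gb
    if free_pct <= alert_pct:
        return "alert"    # act immediately: the store has almost run out
    if free_pct <= warn_pct:
        return "warning"  # plan to grow the store or delete errant snapshots
    return "ok"

# 1 TB store: 850 GB used leaves 15% free (warning);
# 970 GB used leaves 3% free (alert).
print(datastore_alarm(1000, 850), datastore_alarm(1000, 970))
```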
Most modern arrays enable you to create data stores from pools of capacity. To accommodate growth in your data stores, ensure that your storage pools have free capacity. In other words, don't allocate all your disk space to data stores right away, because you can't always predict capacity growth.
Changes in infrastructure
If you design vSphere data stores properly, you shouldn't have to rebuild your guest OSes every time you replace the physical servers. Instead, you can simply replace the ESXi servers and the storage array underneath your VMs.
However, remember your limitations: the number of storage paths permitted per ESXi host, and the effect of replacing one storage array with another. When you transition from one array to another, your ESXi hosts might use both arrays at once, which effectively doubles the number of storage paths. If the original array already consumes more than half of the per-host path limit, the migration can push a host over that limit. As you retire paths to the old array, remember to enable more paths to the new array.
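The path-budget check during a migration is straightforward arithmetic. In the sketch below, the 1,024-path limit is used purely for illustration; the real per-host maximum depends on your ESXi version, so check the configuration maximums for your release.

```python
def can_run_both_arrays(old_array_paths, new_array_paths, per_host_limit=1024):
    """During an array migration, a host sees paths to both arrays at once.

    If the old array already uses more than half the per-host path limit,
    presenting an equally sized new array overflows the budget. The limit
    here (1024) is an assumption for illustration; it varies by ESXi version.
    """
    return old_array_paths + new_array_paths <= per_host_limit

# 600 paths to the old array leaves no room for 600 more to the new one,
# but 400 + 400 fits within the assumed 1,024-path budget.
print(can_run_both_arrays(600, 600), can_run_both_arrays(400, 400))
```

The same arithmetic explains the Site Recovery Manager case: failover testing presents both the protected and recovery arrays to a host simultaneously, consuming paths from the same per-host budget.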
You can come across the same issue if you use VMware Site Recovery Manager for disaster recovery. Failover and failover testing involve more storage paths per ESXi server than replacing a storage array does.