Vertical scalability, also known as resizing or scaling up, is when admins assign additional resources to a VM to make it bigger. It's the virtualization equivalent of adding processor sockets and dual in-line memory modules to an existing server. The hypervisor's management GUI or command-line console makes it quick to scale up, but there's a ceiling.
Admins can sometimes add resources to a VM while it's running. It's also common to create a new, larger VM and migrate the current workload into it. For example, Azure cloud admins can resize a VM through the Azure portal: open the page for the desired VM, select Size and then choose a new size from the list of available sizes. When the size of a running VM changes, the VM typically restarts, so expect some brief workload disruption during scaling.
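The same Azure resize can be scripted through the Azure CLI. This is a minimal sketch, assuming the az CLI is installed and authenticated against a live subscription; the resource group name, VM name and target size below are hypothetical placeholders:

```shell
# List the sizes the VM can be resized to in its current region/cluster
# (resource group and VM names are hypothetical).
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table

# Apply a new size; Azure restarts a running VM to complete the resize,
# so expect a brief interruption.
az vm resize --resource-group myResourceGroup --name myVM --size Standard_D4s_v3
```

Listing the resize options first matters because not every size is available on the hardware cluster currently hosting the VM; moving to an unavailable size can require deallocating the VM first.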
Consider the limits of VMware ESXi 6.7. Each VM can support up to 128 virtual CPUs (vCPUs), 6.1 TB of memory, VM disk file sizes up to 62 TB and up to 10 virtual network interface cards (NICs).
VMware vSphere supports up to four virtual SCSI adapters per VM, and each adapter can handle up to 64 virtual SCSI targets. That works out to a maximum of 256 virtual SCSI targets per VM.
VMware vSphere 6.7 introduced virtual non-volatile memory express (NVMe) support. Each VM can maintain up to four virtual NVMe adapters with up to 15 virtual NVMe targets per adapter, so vSphere can support up to 60 virtual NVMe targets per VM.
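The per-adapter limits above multiply out to the per-VM ceilings. A quick arithmetic check in shell, using the figures cited in this article:

```shell
# Per-VM virtual storage target ceilings for a vSphere 6.7 VM,
# derived from the per-adapter limits cited above.
scsi_adapters=4
scsi_targets_per_adapter=64
nvme_adapters=4
nvme_targets_per_adapter=15

echo "Max virtual SCSI targets per VM: $((scsi_adapters * scsi_targets_per_adapter))"
echo "Max virtual NVMe targets per VM: $((nvme_adapters * nvme_targets_per_adapter))"
```

The adapter count caps per-VM totals: 4 x 64 yields 256 SCSI targets, and 4 x 15 yields 60 NVMe targets.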
These are absolute vertical scalability limits for each VM that runs under a particular hypervisor, and different hypervisors impose different VM limits. Vertical scalability limits are slightly different for a Generation 2 VM using Hyper-V under Windows Server 2016 and Windows Server 2022.
Each VM can support up to 240 vCPUs, 12 TB of memory, VM disk file sizes up to 64 TB and up to 12 virtual NICs, including both legacy and Hyper-V-specific network adapters. Hyper-V VMs can also support up to four virtual SCSI controllers with a total of 256 virtual SCSI disks.
There are also scalability limits for VMs running Red Hat Virtualization (RHV) 4.4. Although the maximum number of concurrent running VMs is unlimited, each VM can support up to 384 vCPUs, 6 TB of memory and storage up to 8 TB.
Extreme vertical and horizontal scalability for virtualized infrastructure can potentially pose problems for virtualization management and monitoring tools. For example, a single RHV Manager instance is limited to 4,000 concurrent running VMs.
The tool supports up to 400 clusters, 300 networks per cluster, 1,500 logical storage volumes per domain and maximum disk sizes of 500 TB. Organizations that demand high scalability may need to verify the capabilities of supporting tools or adjust the infrastructure's architecture to remain within the tools' limits.
Get up to speed on scalability
Storage technology might limit the ability to scale. Examine hardware's limitations before any attempt to scale vertically or horizontally. It might be necessary to upgrade hardware or reorganize disk groups.
Scaling can be a costly process. Analyze costs and benefits against workload requirements to determine how much scaling suits the infrastructure. Retrospective evaluation can then help make future scaling decisions.
A cost-benefit analysis might leave IT teams torn between scale out and scale up, but further analysis can break the tie. Monitoring tools make this much easier: take advantage of the objective data they provide on the current infrastructure to make an informed decision about scaling strategy.
In practical terms, however, it's rare to scale a VM anywhere near its absolute limits. Most enterprise workloads don't require -- or cannot practically use -- such extensive resources.
Even when the workload could use such extensive scaled-up resources, it's often better practice to scale out. Horizontal scaling spreads out the workload to multiple VMs -- preferably on different physical servers -- to meet the workload's performance demands. This strategy also improves the workload's availability and resilience -- eliminating the single point of failure that one huge vertically scaled VM creates.