Organizations can use virtualization scalability to quickly and effectively grow virtual environments to accommodate increasing workloads and user demands, but cost-benefit analysis is necessary to ensure scale doesn't lead to waste.
There isn't a single objective measure of how much scaling in a virtualized data center is too much. Perhaps the best way to gauge scaling is to perform a cost-benefit analysis on each workload and determine the virtualized resources that are most appropriate for each. When the cost of the resources provisioned to a workload outweighs the benefits that those resources provide, that might indicate there is too much scaling for that workload.
Consider a scale-up scenario where you'd add more resources, such as virtual processors and memory, to a VM. That virtualization scalability tactic makes the virtual server bigger. Theoretically, this allows that virtual server to do more work by letting the workload handle more requests and transactions. Although it's easy to allocate more resources to a VM, it's important to evaluate the results of virtualization scalability.
For example, suppose that basic monitoring reports a VM's average processor utilization above 90%, and the workload experiences unusually high latencies in its transactions. You might choose to scale up the vCPUs and memory provisioned to the VM to ease that computing bottleneck, then repeat the same measurements afterward. If average processor utilization and the resulting workload latencies both drop, the additional scaling was probably worthwhile. If the problems remain, even after average processor utilization falls, other factors are likely causing the problem, and the additional resources provisioned to that VM are wasted until you reclaim them for reuse elsewhere.
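That before-and-after comparison can be sketched as a simple decision check. The thresholds, metric names and values below are hypothetical placeholders, not output from any particular monitoring tool:

```python
# Hypothetical before/after metrics for a scaled-up VM. The 90% CPU
# threshold and latency target are illustrative assumptions.
CPU_THRESHOLD = 0.90      # average utilization that signals a bottleneck
LATENCY_SLO_MS = 250      # acceptable transaction latency

def scale_up_helped(before: dict, after: dict) -> str:
    """Classify the outcome of adding vCPUs/memory to a VM."""
    cpu_eased = after["avg_cpu"] < CPU_THRESHOLD <= before["avg_cpu"]
    latency_ok = after["latency_ms"] <= LATENCY_SLO_MS
    if cpu_eased and latency_ok:
        return "scale-up worthwhile"
    if cpu_eased and not latency_ok:
        return "bottleneck is elsewhere; reclaim the extra resources"
    return "inconclusive; keep monitoring"

before = {"avg_cpu": 0.93, "latency_ms": 480}
after = {"avg_cpu": 0.55, "latency_ms": 470}
print(scale_up_helped(before, after))
```

In this illustrative run, utilization falls but latency stays high, which matches the case above where the extra resources should be recovered for reuse elsewhere.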
Learn best practices for making scalability decisions
Storage performance might impose limitations on your ability to scale. Consider upgrading technology or moving disk groups to make the process smoother. Storage preparation is key to well-planned scaling.
Once your storage setup is ready, the primary decision you face is between scaling up and scaling out. Each path suits different contexts, but the best way to decide is to use objective data from monitoring tools. Monitoring also enables you to track whether your chosen strategy is working.
If you're attempting to scale vertically, be aware that different hypervisors can create limitations on how many resources you can allocate to your VMs. Vertical scalability is a useful tool to have in your arsenal, but you need to know what your system is capable of doing beforehand.
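A preflight check against per-VM ceilings can catch an impossible vertical-scaling request before you attempt it. The limits below are placeholders; real maximums vary by hypervisor product, version and license, so substitute the documented values for your environment:

```python
# Illustrative per-VM vCPU ceilings. These numbers are assumptions for
# the sketch -- consult your hypervisor's documentation for real limits.
VCPU_LIMITS = {"esxi": 768, "hyper-v": 240, "kvm": 710}

def can_scale_up(hypervisor: str, requested_vcpus: int) -> bool:
    """Return True if the requested vCPU count fits the per-VM ceiling."""
    limit = VCPU_LIMITS.get(hypervisor.lower())
    if limit is None:
        raise ValueError(f"unknown hypervisor: {hypervisor}")
    return requested_vcpus <= limit

print(can_scale_up("hyper-v", 320))  # exceeds the assumed ceiling
```

The same pattern extends to memory, virtual disks and other per-VM maximums that bound how far a single VM can grow.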
Evaluate virtualization scalability against workload needs
Clustering can be a valuable tool in the virtualized data center, but it's a game of diminishing returns. For instance, consider a scale-out scenario that clusters multiple instances of a workload for additional performance, redundancy and resilience. Each node added to a cluster multiplies the resources committed to a workload: three VM nodes consume roughly three times the resources of a single node, for example. This can be invaluable for short-term virtualization scalability tasks, such as big data jobs, but too many nodes may be wasteful for ordinary enterprise applications.
Once again, it's crucial to understand the workload and its computing needs. If the goal is simple resilience, then the real question is whether the workload can run properly -- handling some minimum number of transactions or requests -- with one fewer node in the event of a node or server fault. If so, adding more nodes might qualify as too much scaling. If not, adding another node might be appropriate for that particular workload.
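The resilience question above reduces to simple arithmetic: with one node lost, can the survivors still serve the required load? The node counts and transaction rates here are hypothetical:

```python
# N-1 check: after losing one node, do the remaining nodes still meet
# the minimum required transaction rate? All figures are illustrative.
def survives_node_loss(nodes: int, per_node_tps: int, required_tps: int) -> bool:
    """True if the cluster meets required_tps with one node down."""
    return (nodes - 1) * per_node_tps >= required_tps

# 3 nodes at 400 TPS each must sustain 700 TPS after a node fault.
print(survives_node_loss(3, 400, 700))   # 2 * 400 = 800 >= 700 -> True
# A 4th node adds cost without changing the N-1 answer here:
print(survives_node_loss(4, 400, 700))   # also True -- possibly too much scaling
```

If the three-node cluster already passes the N-1 test, the fourth node only multiplies resource commitment without improving the resilience the workload actually needs.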
Scaling must also include an evaluation of physical risks. Servers fail, and VM migrations and restarts take time, so the physical placement of each VM can affect virtualization scalability decisions. For example, it's easy enough to fill a physical server with VMs, but that leaves less room for more VMs or additional resources. Additionally, more VMs create more risk because the underlying server becomes a single point of failure for all of the VMs running on that server.
You might choose to redistribute VM workloads so that multiple nodes of the same workload aren't on the same physical server, multiple mission-critical workloads aren't on the same server, and enough unused resources exist on each server to accommodate resource adjustments to VMs that need more resources over time. Leaving too many unused resources on a server can be wasteful. Virtualization scalability decisions require a perennial balancing act.
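The redistribution rule above is essentially an anti-affinity constraint: no two nodes of the same workload should land on one physical server. A minimal greedy placement sketch, with hypothetical host names, workloads and a per-host VM capacity, might look like this:

```python
# Greedy placement sketch enforcing simple anti-affinity: no two VMs of
# the same workload share a host. Hosts, VMs and capacity are hypothetical;
# real schedulers (e.g., DRS rules) handle this far more robustly.
def place(vms, hosts, capacity):
    """vms: list of (vm_name, workload). Returns {vm_name: host}."""
    placement = {}
    used = {h: [] for h in hosts}
    for vm, workload in vms:
        for host in hosts:
            same_workload = any(w == workload for _, w in used[host])
            if not same_workload and len(used[host]) < capacity:
                used[host].append((vm, workload))
                placement[vm] = host
                break
        else:
            raise RuntimeError(f"no anti-affine host available for {vm}")
    return placement

vms = [("web-1", "web"), ("web-2", "web"), ("db-1", "db")]
print(place(vms, ["hostA", "hostB"], 2))
```

Note that the capacity parameter deliberately leaves headroom on each host, reflecting the balancing act between packing servers densely and reserving room for future resource adjustments.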