- Scott D. Lowe, ActualTech Media
Hyper-converged infrastructure has become an increasingly complex segment of the data center architecture market over the last few years. The definition of hyper-convergence has expanded as this technology continues to fracture into more targeted products. One critical aspect of HCI that's evolved over time is hypervisor support.
Examples of hypervisors used in hyper-convergence vary. Early iterations of common hyper-converged systems focused on extending the capabilities of offerings from companies such as VMware and, later, Microsoft.
Early hyper-converged offerings for VMware vSphere or Microsoft Hyper-V may look and act like modern hyper-converged products, but they were intended to simplify and lower the cost of storage for smaller organizations, and that was pretty much it. Over time, though, some HCI vendors turned to the open source Kernel-based Virtual Machine (KVM) hypervisor as an alternative. HCI pioneer Nutanix is one of the vendors with its own KVM-based hypervisor, but there are others.
Let's explore examples of hypervisors used in HCI over the years as we assess how the role of the hypervisor in hyper-convergence has changed and continues to evolve.
The rise of vSphere
Support for vSphere made a lot of sense for early HCI vendors. VMware was, and still is, the hypervisor market leader, even if vSphere's dominance is fading with the rise of Microsoft's Hyper-V and KVM. Businesses seeking cheaper alternatives to on-premises virtualization are turning to the public cloud for needs such as moving email off premises and deploying customer relationship management platforms. Because of these changes, there's less need for the on-premises services vSphere hosts. VMware is, of course, working hard to counter vSphere's decline with the constant addition of new services and new operational methodologies, including the ability to run vSphere directly in Amazon data centers through VMware Cloud on AWS.
One reason for vSphere's dominance has been the broad ecosystem that has come into existence over the past decade and a half as VMware has grown. Today, there are hundreds, if not thousands, of companies providing products that support vSphere.
So considering vSphere's massive ecosystem and mature features, why would KVM even be on people's minds? Cost is one reason, but there are several others.
KVM allows enterprises to reduce hypervisor licensing expenses. Given its enterprise licensing model, Hyper-V generally isn't that expensive. However, vSphere customers have faced steadily increasing costs over the years, forcing CIOs and other decision-makers into tough choices. vSphere is the core of many data center environments, and supplanting it is expensive and technically challenging. But as the price delta between vSphere and its competitors widens, the financial justification for making the jump gets easier for organizations seeking to avoid the "VMware tax."
Seeing an opportunity, HCI vendors have either introduced support for KVM or built their platforms around the open source hypervisor. Hyper-convergence for KVM began in a simple manner because of the way many hyper-converged products are designed. Most use what's called a virtual storage appliance: a VM that runs on each hyper-converged node, manages that node's local storage and coordinates with the VSAs running on the other nodes in the cluster. The VSA model provides flexibility, enabling hypervisors to be swapped out as long as the HCI vendor builds in that support. Early on, some vendors augmented vSphere support with Hyper-V or KVM support.
Some hyper-converged companies have gone beyond simply adding KVM support and built their entire platforms around the KVM hypervisor. In these cases, KVM forms the core of the architecture, with no plans to support other hypervisors. Because these single-hypervisor architectures don't have to abstract storage management behind a VSA, the OS on the local node provides access to local storage directly; in some cases, a kernel module accomplishes this. There's a sense of simplicity in these VSA-less architectures and, depending on how the product is built, there can be performance benefits, too.
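Because KVM lives in the Linux kernel itself, readiness for this kind of VSA-less, KVM-native design comes down to ordinary host checks. As a generic sketch -- not specific to any vendor's platform -- these standard commands confirm that a node's CPU exposes hardware virtualization and that the kernel's KVM interface is available:

```shell
# Generic host checks before running KVM workloads (not vendor-specific).

# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm); 0 means no
# hardware virtualization support was detected on this host.
hw_virt=$(grep -oE 'vmx|svm' /proc/cpuinfo 2>/dev/null | wc -l | tr -d '[:space:]')
echo "hw_virt_flags=${hw_virt}"

# The kvm kernel module (plus kvm_intel or kvm_amd) exposes /dev/kvm,
# the device node user-space VMs are launched through.
if [ -e /dev/kvm ]; then
  echo "kvm: /dev/kvm present"
else
  echo "kvm: /dev/kvm not present"
fi
```

On a node that passes both checks, guests talk to the kernel's KVM module directly, which is exactly the layer of abstraction the VSA-less platforms described above are able to remove.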
Moreover, HCI vendors that have chosen to use KVM as a key part of their platform -- such as Cloudistics Inc., Nutanix and Scale Computing -- have modified and extended KVM to meet enterprise feature-set expectations for hypervisors. For example, Scale Computing includes its patented storage system, called Scribe (Scale Computing Reliable Independent Block Engine). Scribe isn't technically added to KVM; rather, it integrates with Scale Computing's KVM variant to provide fully integrated storage, with better performance than HCI platforms that rely on VSAs for storage services.
Storage is just one aspect of a complete hyper-converged platform, of course. Cloudistics and Nutanix are bringing more networking capabilities into their platforms and adding more features customers can use alongside the KVM hypervisor. Cloudistics, for example, features a software-defined network that enables deeper application control. Nutanix recently introduced Flow, which provides advanced networking and application-centric network security capabilities. Nutanix's Flow service is tightly integrated with its AHV virtualization, which is based on KVM. AHV is the hypervisor for Nutanix Acropolis, the OS for the company's HCI.
Is the startup risk a major concern?
There are tradeoffs to using KVM or any of the other examples of hypervisors cited here. The VMware ecosystem is vast, and VMware isn't going away anytime soon. Nutanix became a public company in 2016 and appears to be doing well, but others that provide KVM-centric hyper-converged systems are still privately held startups. These include Cloudistics, Maxta and Scale Computing. And no matter how strong a technology appears, there's always the risk the company you've bought it from won't make it over the long term.
My advice is, because you don't know whether the vendor you chose for your hyper-converged system will be around forever -- and how could you? -- consider a shorter time horizon. If your company's replacement cycle is three to five years, that's the length of time you need that vendor to survive. It just needs to make it through that cycle after you buy, and you'll be at your next replacement decision anyway.
Education, skills and training
Any enterprise that's been involved in virtualization for any length of time will have IT staff with deep skills in VMware or Microsoft technologies. As such, learning KVM may be a bridge too far, particularly if you're also making the jump into a new architecture, such as hyper-converged infrastructure.
Here's the thing, though: Hyper-converged infrastructures that use KVM as the only hypervisor option also include easy-to-use administrative interfaces that hide most, if not all, of the complexity that may accompany a switch.
I have a three-node Scale Computing cluster in my lab that I use for testing and building projects. I describe its hypervisor management interface as uncomfortably bare. That's not intended as a negative statement, and although I don't like the term nerd knobs, I will say that Scale Computing's management interface doesn't have many at all. It's just point-and-click. If you want to move a VM to another node, just click on the "Move VM" icon and choose a node. Setting up a schedule to send snapshots to a remote cluster requires only a couple of clicks.
Other KVM-centric hyper-converged platforms have more hypervisor functionality than Scale Computing (for example, Cloudistics and Nutanix AHV), but they come with more complexity. That may work for some people and not for others. If you have a hardcore virtualization background, you might want deeper configuration parameters that aren't there with some products. If you can't tweak every parameter, how can you get the most out of your cluster? Compare and contrast hyper-converged products before making the jump so you get the level of hypervisor configurability you want.
Perhaps the biggest skills-related issue is how to migrate hundreds or thousands of VMs from vSphere to KVM. Fortunately, there are tools available to help with these transitions, such as Red Hat's virt-v2v or Scale Computing's HC3 Move, powered by Carbonite. These tools help customers convert vSphere VMs, including Windows Server guests, to run on KVM.
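To give a flavor of what such a migration looks like, here's a sketch of a virt-v2v invocation that pulls a guest from vCenter and converts it for KVM. The names -- vcenter.example.com, DC1, esxi01 and legacy-vm -- are placeholders for your own environment, and the block only prints the assembled command so it can be reviewed before being run for real:

```shell
# Hypothetical names throughout; substitute your own vCenter, data
# center, ESXi host and VM. The "@" in the vSphere user name must be
# URI-escaped as %40 inside the vpx:// connection URI.
VCENTER="vcenter.example.com"
DATACENTER="DC1"
ESXI_HOST="esxi01"
VM_NAME="legacy-vm"

# virt-v2v fetches the guest over the vpx:// transport, injects virtio
# drivers and rewrites the boot configuration for KVM, placing the
# result in the default libvirt storage pool. Echoed here as a dry run.
echo virt-v2v \
  -ic "vpx://administrator%40vsphere.local@${VCENTER}/${DATACENTER}/${ESXI_HOST}?no_verify=1" \
  "${VM_NAME}" \
  -o libvirt -os default
```

Expect a conversion like this to take a while per VM -- the entire disk image is copied and rewritten -- so bulk migrations are usually scripted and batched rather than done by hand.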
The future for KVM in HCI
Why would hyper-converged vendors go down the KVM road when vSphere still dominates, Hyper-V continues its rise and enterprises are looking to the cloud for more workloads? As mentioned earlier, cost is a big factor. KVM is less expensive than VMware, and HCI vendors do a good job filling in any features and capabilities left behind when a customer moves to KVM.
It goes deeper than that, however. HCI vendors are seeking ways to differentiate themselves and provide complete platforms to customers. To maximize value, vendors must have control over the full stack. For hyper-converged vendors that only embrace vSphere or Hyper-V, there's little hypervisor customization. They can't change the internals of the hypervisor, a core part of any hyper-converged system, to better meet their product or customer's needs. Because it's open source, KVM is a different story. Vendors are free to make any changes necessary to meet the design goals of their HCI platform.
A KVM hyper-converged support report card
Popular examples of hypervisors used in hyper-converged environments include VMware vSphere and Microsoft Hyper-V. Many hyper-converged vendors have jumped onto the open source KVM hypervisor bandwagon as well. Here's how some of them stack up.
Cloudistics. The Cloudistics Spark hypervisor features KVM virtualization as an underpinning and forms the basis for the entire Cloudistics platform.
Maxta. Maxta's VSA model supports multiple hypervisors -- primarily KVM and vSphere -- as well as OpenStack deployments. That VSA approach makes it easier for the company to add support for alternative hypervisors.
Nutanix. Nutanix has quickly risen to a prominent position in the world of hyper-convergence. Since its inception, Nutanix has supported vSphere. However, as a key differentiator, it developed the Acropolis Hypervisor, now known simply as AHV. It's a heavily customized version of KVM with additional capabilities intended to bring it into the enterprise world. For example, with AHV, customers are able to extend an HCI cluster running vSphere with AHV storage-only nodes, meaning they can add more storage without having to pay for more vSphere licenses.
Scale Computing. Scale Computing's hyper-convergence platform is built around an extended version of KVM. Unlike Maxta, for example, it doesn't use a virtual storage appliance in its platform. With native support for KVM, and only KVM, Scale has designed a system that minimizes abstraction and provides comprehensive hypervisor services to its HCI system.
Public clouds bring even more reason for HCI vendors to embrace a KVM-centric model. Google and Amazon provide support for KVM-based workloads. Google's Cloud Platform runs atop a heavily modified and hardened KVM variant, and in late 2017, Amazon launched its Nitro KVM-based hypervisor. Over time, Amazon will convert all instance types to Nitro and phase out its use of the Xen hypervisor.
It's easier than ever to create hybrid environments that span the public cloud and the on-premises data center without constantly transforming workloads between architectures. For adopters of KVM-centric hyper-converged infrastructure systems, it's becoming common to find partnerships between the HCI vendor and KVM-centric cloud providers. Those partnerships make it easy for organizations to build environments that bridge both worlds.
I don't believe open source is the only way to go for everything enterprise. However, in the case of hyper-convergence, I do believe KVM's open nature makes it incredibly attractive for vendors working to build comprehensive hyper-converged platforms. There are benefits for customers as well: lower cost, more choice and new opportunities in workloads, applications and ways of thinking about the data center.
There are downsides, of course. These include adjusting IT staff skill sets, adapting to a less mature ecosystem and migrating away from other, often more established, comprehensive technologies, such as vSphere. Vendors offering KVM-centric hyper-converged systems are working to minimize these downsides, and over time, I expect them to achieve parity in most areas with platforms built on more expensive hypervisors.