Most of the largest server and storage vendors offer hyper-converged products, leaving buyers with a plethora of sometimes confusing options.
The hyper-converged infrastructure (HCI) definition has shifted a bit over the years, and there is often disagreement on exactly where to draw the line as new and innovative hyper-converged products hit the market. There is no end to the debate over what is and is not "true" HCI. Here, HCI is generally defined as data center architecture that either loosely or tightly couples servers, storage and a hypervisor. Do not ignore the "loosely" or "tightly" part of the previous sentence, as it is this aspect of the architecture that creates the most angst among pundits about whether new self-identified HCI products actually deserve the moniker. We will discuss this in more depth later.
Hyper-converged infrastructure vs. legacy infrastructure
Hyper-convergence is both an evolution of and an alternative to more traditional infrastructure approaches that maintain a complete separation of the various resources -- servers and storage -- in the data center.
Hyper-convergence is an evolution in the sense that it is the next step in data center architecture and attempts to solve some of the shortcomings of the traditional three-tier architecture. In the case of HCI, the intended improvements include an easier overall management experience, easier scaling or growth of the environment, and a lower total cost of ownership, thanks to the reduced administrative burden. Because management is simpler, the environment can be operated by IT generalists rather than resource specialists. Those generalists can take on other IT duties as well, which creates economies of scale on the staffing side.
But, HCI cannot and will not fully supplant traditional approaches, which remain valid. There are organizations that prefer to carefully tailor their data centers or that are operating workloads that they believe are not a fit for an HCI product. As such, a traditional approach to infrastructure -- one in which the organization buys separate servers, storage and storage fabric devices and builds the environment themselves -- is and will remain a viable option for many.
Hyper-converged infrastructure workloads
One of the most common early HCI use cases revolved around virtual desktop infrastructure. VDI environments are generally predictable and customers can more easily plan such workloads with metrics that include how many desktops can reside on a single node. As the virtual desktop count approaches this figure, it is time to add another node to support the next desktop deployment wave.
But virtual desktops are just part of the story. HCI also enables you to quickly deploy new virtual desktops as they are needed, and planning for new workloads is straightforward: Figuring out when to add nodes is a matter of simple arithmetic. This rapid resource allocation capability extends beyond desktops to other kinds of workloads, too. The formula is simple: Add nodes, get resources.
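The "simple arithmetic" described above can be sketched in a few lines. This is a hypothetical capacity-planning helper, not vendor sizing guidance; the desktops-per-node figure and the spare-node convention are illustrative assumptions.

```python
def nodes_needed(total_desktops: int, desktops_per_node: int,
                 spare_nodes: int = 1) -> int:
    """Return the node count for a desktop target, plus spare
    capacity for failover (often called N+1 sizing)."""
    if desktops_per_node <= 0:
        raise ValueError("desktops_per_node must be positive")
    # Ceiling division: a partially full node still requires a whole node.
    full_nodes = -(-total_desktops // desktops_per_node)
    return full_nodes + spare_nodes

# Example: 550 desktops at an assumed 100 desktops per node,
# keeping one spare node for failover.
print(nodes_needed(550, 100))  # 7: six nodes for capacity, one spare
```

As the desktop count approaches the capacity of the current cluster, the same calculation tells you when the next node purchase is due.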
One of the original downsides of hyper-convergence for VDI was its inability to work with graphics acceleration hardware, which is a must for many applications, such as CAD. But today, there are several graphics acceleration options available to those deploying VDI on HCI. Organizations are running even their most mission-critical and sometimes unpredictable workloads on HCI systems. With few exceptions, almost any kind of application is suitable for modern hyper-converged products.
Hyper-convergence also shines when it comes to supporting highly distributed environments, such as companies with hundreds or thousands of small branch locations. In these environments, attempting to build a traditional IT environment is cost-prohibitive. With the ability for hyper-converged systems to start very small with as few as two or three nodes, the technology can be a boon for such environments.
Today, big data and analytics are also burgeoning applications that have relatively linear resource growth needs that can be nicely satisfied with HCI. But, this does not mean that only linear-scale applications are good candidates for the technology. As mentioned, almost any application can operate well.
Using extensive research into the hyper-converged infrastructure market, TechTarget editors focus this article series on 10 market-leading vendors that offer software-defined storage and appliances. Our research includes data from TechTarget surveys and reports from other well-respected research firms, including Gartner.
Hyper-converged infrastructure benefits
Organizations everywhere are seeking ways to do IT better and to optimize their environments.
Scalability helps reduce the storage footprint. One of HCI's most notable characteristics is scalability without the need to implement a separate storage networking fabric. When companies need additional computing capacity or storage capacity, they simply add more nodes to their HCI cluster. HCI clusters connect using nothing more than standard Ethernet switches, making them simpler to deploy and manage than some storage fabrics. HCI is built with this kind of scaling capability included. With hyper-converged technology, the separate storage layer is mostly eliminated, although there is an emerging HCI variant that takes a more relaxed approach to server and storage integration.
Some HCI products let you choose the hardware and configuration that best suit your organization's needs, rather than an all-in-one appliance. Many people believe that HCI rejects customer attempts to granularly configure individual resources. It has been described as a data center in a box and, to be fair, that description is not necessarily wrong. Several vendors sell preconfigured hyper-converged appliances that cannot be opened up and tinkered with by administrators. However, some HCI vendors do allow this, which enables customers to configure resources to an exacting level.
Many organizations prefer the black box appliance, as it takes the challenge out of trying to figure out resource levels. For others, the ability to configure resources brings HCI closer to a traditional approach that may be more comfortable and appropriate for that organization.
HCI eliminates the need for storage, networking and server specialists. Perhaps one of the biggest potential impacts of HCI is on the IT team. Whereas a traditional environment requires teams of deeply trained specialists, an HCI deployment and its ongoing management can be handled by IT generalists. A generalist can often take on a broader set of duties, which can be beneficial.
But do not think that hyper-converged infrastructure management is always simple. Although you will not have to create logical unit numbers or define zones, you will have to learn how to manage the new system, including how to create new workloads and develop data protection policies.
HCI deployment options
There are generally three ways to deploy an HCI cluster:
- Integrated hyper-converged appliances. These are devices that you buy off the shelf, install into a rack, plug in and go. They are preconfigured, so you do not typically have the ability to define granular resource definitions. HCI appliances sometimes ship with the hypervisor software ready to go as well.
- HCI software that is sold as part of a reference architecture model. If you need or want more granularity, or you want the ability to design your resource configuration, you might consider a software-based approach. With these hyper-converged products you procure your own hardware through an approved partner or, in some cases, you simply reuse hardware you already have, as long as it meets minimum requirements.
- Hyper-converged infrastructure as a service in the public cloud. Given the prevalence of the public cloud, it is no surprise that HCI vendors are also creating clusters that can straddle the public and the private cloud and enable easier workload mobility between the two environments.
Hyper-converged infrastructure features
Just because you are running what is considered a simplified architecture, that does not mean you are giving up anything. In fact, with HCI, you generally get a comprehensive set of features comparable to what you would expect to find in a combination of servers and storage. And, as hyper-converged technology evolves, vendors continue to add even more features. Of course, not every vendor supports every feature, so make sure you look carefully before you make the HCI leap.
The hypervisor: a built-in feature versus a separate product or integration. The hypervisor is an interesting component in HCI. In some cases, you do not get much of a choice of hypervisor. For example, it is doubtful that VMware will ever support anything but vSphere for vSAN. With other HCI vendors, you get a choice and, depending on the vendor, can run vSphere, Microsoft's Hyper-V or the open source KVM. Moreover, a subset of vendors have heavily modified KVM and market it as their own, with the goal of helping organizations reduce the amount of money they pay VMware annually.
Most HCI products include standard storage management features, such as inline deduplication, compression, replication and other types of data protection. Similar to dedicated storage arrays, these features can help differentiate hyper-converged products during the buying process.
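To illustrate the general idea behind inline deduplication, here is a toy content-hash sketch: each incoming block is fingerprinted, and a block already in the store is referenced rather than written again. This is a simplified model of the technique; real HCI storage layers operate on fixed- or variable-size blocks with far more sophistication, and the class and block size here are illustrative assumptions.

```python
import hashlib

class DedupStore:
    """Toy inline-deduplicating block store (illustrative only)."""
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}    # fingerprint -> block data, stored once
        self.refcount = {}  # fingerprint -> number of references

    def write(self, data: bytes) -> list:
        """Split data into blocks; store each unique block only once."""
        fingerprints = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:
                self.blocks[fp] = block  # new unique block
            self.refcount[fp] = self.refcount.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints

    def read(self, fingerprints: list) -> bytes:
        return b"".join(self.blocks[fp] for fp in fingerprints)

store = DedupStore()
payload = b"A" * 8192           # two identical 4 KiB blocks
fps = store.write(payload)
print(len(fps), len(store.blocks))  # 2 logical blocks, 1 unique block
```

Compression and replication follow the same pattern of operating on data inline, before it lands on disk, which is why these features factor so heavily into HCI product comparisons.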
A look at the hyper-converged market
The HCI market is filled with numerous established vendors and relative newcomers. For example, newcomers Datrium and NetApp have pushed the boundaries of hyper-converged architecture to create products that meet a broad range of applications. More traditional players, including Maxta, Nutanix, Pivot3 and Scale Computing, continue to innovate as well, offering a wide array of hyper-converged products. Traditional vendors, including Dell EMC, Cisco, Hewlett Packard Enterprise and VMware, are also a big part of this market, as they offer comprehensive product portfolios.