
Disaggregated hyper-converged infrastructure vs. traditional HCI

A technical examination of what the pending demise of hyper-converged infrastructure 1.0 and the rise of hyper-convergence 2.0 mean for planning, buying, deployment and management.

Early on, hyper-converged infrastructure vendors marketed their products as low-cost, commodity computing platforms primarily designed for use in remote office/branch office or SMB environments. Since that time, hyper-convergence has grown up and will likely remain a fixture in the data center for a wide variety of use cases for the foreseeable future.

Hyper-converged platforms have also evolved in how they are purchased, deployed and managed. The clearest illustration of this evolution is the development of next-generation, disaggregated hyper-converged infrastructure products, also known as hyper-converged infrastructure 2.0 or HCI 2.0.

How does traditional hyper-convergence differ from disaggregated hyper-converged infrastructure, and what should organizations keep in mind from a planning, buying, deployment and management perspective? Let's find out.

Planning for HCI 2.0

One of the key planning tasks with next-generation hyper-converged infrastructure is determining how many nodes are needed. If the new hyper-converged platform will replace an existing hyper-convergence deployment, then the node count will likely be substantially lower than the number of nodes currently in place for at least two reasons.

First, one of the key characteristics of hyper-converged infrastructure 2.0 is hardware disaggregation.

Standard hyper-converged platforms tightly couple compute, storage and network resources at the node level. If an organization needs to increase storage capacity, it has to purchase additional nodes -- compute and networking included -- not just more storage, paying for compute resources it doesn't need. Similarly, if an organization requires more compute power to run a high-performance workload, it has to add nodes as well, which means paying for unneeded additional storage capacity.

Hyper-converged infrastructure 2.0 introduces the concept of disaggregated hardware. Rather than integrating compute, network and storage resources at the node level, vendors offer storage modules that scale data capacity independently of other resources. An organization that has deployed a large number of traditional hyper-converged nodes -- simply because that was the only way to acquire the necessary storage -- can decrease its node count by transitioning to a disaggregated hyper-converged infrastructure and adding storage modules instead.
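
To see why, consider a back-of-the-envelope sizing calculation. The Python sketch below uses entirely made-up workload requirements and node specifications -- the numbers are illustrative assumptions, not any vendor's actual specs -- but it shows how coupling storage to compute inflates the node count.

```python
import math

# Hypothetical workload requirements and hardware specs -- illustrative
# assumptions only, not tied to any vendor's actual products.
REQUIRED_CORES = 256        # total CPU cores the workloads need
REQUIRED_STORAGE_TB = 400   # total usable capacity the workloads need

CORES_PER_NODE = 32         # compute capacity of one node
STORAGE_PER_NODE_TB = 20    # storage bundled into one traditional node
STORAGE_PER_MODULE_TB = 50  # capacity of one disaggregated storage module

# Traditional HCI: compute and storage are coupled, so the node count
# is driven by whichever resource is the bottleneck.
traditional_nodes = max(
    math.ceil(REQUIRED_CORES / CORES_PER_NODE),
    math.ceil(REQUIRED_STORAGE_TB / STORAGE_PER_NODE_TB),
)

# HCI 2.0: compute nodes and storage modules scale independently.
compute_nodes = math.ceil(REQUIRED_CORES / CORES_PER_NODE)
storage_modules = math.ceil(REQUIRED_STORAGE_TB / STORAGE_PER_MODULE_TB)

print(f"Traditional HCI: {traditional_nodes} nodes")   # 20 nodes
print(f"HCI 2.0: {compute_nodes} compute nodes + "
      f"{storage_modules} storage modules")            # 8 + 8
```

In this storage-heavy scenario, the traditional cluster needs 20 nodes to reach 400 TB even though eight nodes would satisfy the compute requirement; the disaggregated design meets the same requirements with eight compute nodes and eight storage modules.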


Second, adopting hyper-convergence 2.0 will likely reduce node sprawl, because the philosophy behind hyper-converged infrastructure has changed somewhat since the technology's inception. In the beginning, the industry marketed hyper-convergence as an inexpensive alternative to traditional data center hardware, so vendors based their hyper-converged infrastructure products on commodity hardware.

Today, there are still some hyper-converged systems that use commodity hardware, but most -- both traditional and next generation -- now use enterprise-grade hardware. For example, even the Hewlett Packard Enterprise SimpliVity 325 Gen10, HPE's entry-level model (selling for under $1,400), can be equipped with a 32-core CPU and 2 TB of RAM.

The point is that nodes can be far more capable when they aren't constrained by commodity hardware, which in turn helps reduce node sprawl.


Buying hyper-converged infrastructure

When purchasing a hyper-converged infrastructure platform, disaggregated or otherwise, organizations should consider not just the hardware but also the use cases the hyper-converged product is designed for.

For example, HPE's SimpliVity is primarily designed for use as a virtualization platform. Even so, the SimpliVity line comes in at least four different flavors. HPE offers high-performance, expandable, and backup and archive versions of its SimpliVity 380 series. The company also sells the aforementioned SimpliVity 325 for SMB, remote office/branch office and edge deployments. While hardware may be the key differentiator between the various models, it is not the only one. The high-performance edition of the SimpliVity 380 is also the only SimpliVity model that can function as a Hyper-V host (it also supports VMware). All of the other models mentioned support VMware only.

HPE is not the only vendor that markets hyper-converged platforms for specific use cases. Dell EMC does something similar with its hyper-converged infrastructure portfolio. The company designed its VxRail as a VMware hosting platform, while its VxFlex platform supports multiple hypervisors for server-based SAN use. There's also Dell EMC Cloud, which is a hyper-converged platform designed for Azure Stack.

Other examples of hyper-converged infrastructure 2.0 products and their use cases include HPE Nimble Storage dHCI, Datrium Automatrix and NetApp HCI.

Introduced last year, HPE Nimble Storage dHCI is HPE's hyper-converged 2.0 offering; dHCI stands for disaggregated hyper-converged infrastructure. The platform pairs HPE ProLiant servers with Nimble Storage arrays and allows users to scale storage and servers independently.

Datrium Automatrix disaggregates storage from physical hardware to consolidate primary storage, backup and DR. It pairs a local cache in the compute host/server with centralized storage that holds older data, and it can keep data on premises or in the cloud. Datrium Automatrix products include DVX Data Nodes, Compute Nodes, S3 backup as a service and ControlShift workload mobility DR as a service.

There is also NetApp HCI, an all-flash system based on that vendor's SolidFire technology. Similar to HPE Nimble Storage dHCI, NetApp HCI integrates separate servers and SolidFire flash storage in a single chassis, with the ability to separately add compute or storage capacity. NetApp targets the platform at hybrid clouds, end-user computing/virtual desktop infrastructure and workload consolidation.

Deploying hyper-converged 1.0 and 2.0 systems

Hyper-convergence's modular infrastructure has historically made it easy to deploy. The prebuilt nodes already include all the necessary drivers, and vendors largely automate the setup process.

While this same basic concept may still apply to disaggregated hyper-converged infrastructure, the deployment process may not necessarily be quite as easy as with the standard models. In some cases, simplicity is sacrificed in the name of flexibility.

Take DataCore's hyper-converged infrastructure 2.0 platform as an example. The company now supports connecting its hyper-converged infrastructure -- which it refers to as Hybrid Converged Architecture -- to SAN storage. Using a back-end SAN is by no means mandatory, but if an organization does choose to connect its DataCore hyper-converged platform to a SAN, the process is unlikely to be fully automated.


Managing hyper-converged infrastructure 1.0 vs. 2.0

As with earlier generations of hyper-convergence, disaggregated hyper-converged infrastructure vendors typically include a management layer that lets users manage the platform as a whole, rather than having to manage each node individually.
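
To make that distinction concrete, the sketch below contrasts per-node administration with cluster-level administration. The endpoints, payloads and addresses are entirely hypothetical -- no real vendor's management API is shown -- but the pattern is what such management layers implement: one call to the layer replaces a call to every node.

```python
import requests

# Hypothetical addresses and endpoints for illustration only --
# not any real vendor's management API.
NODE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
CLUSTER_MANAGER = "https://hci-mgmt.example.com/api/v1"

def update_firmware_per_node(version: str) -> None:
    """Without a management layer: contact every node individually."""
    for ip in NODE_IPS:
        requests.post(f"https://{ip}/api/firmware/update",
                      json={"version": version}, timeout=30)

def update_firmware_cluster_wide(version: str) -> None:
    """With a management layer: one call; the layer orchestrates a
    rolling update across all nodes on the caller's behalf."""
    requests.post(f"{CLUSTER_MANAGER}/cluster/firmware/update",
                  json={"version": version, "strategy": "rolling"},
                  timeout=30)
```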

Management tools and capabilities differ among vendors. HPE provides management and predictive analytics capabilities through InfoSight for HPE SimpliVity. Similarly, Dell EMC offers a tool called VxFlex Manager for managing VxFlex-based systems (Dell's VxRail systems are managed through VMware software). Likewise, Nutanix enables management of its systems through its Prism application.

Other examples include DataCore, which provides management functionality through its DataCore Insight Services, and NetApp, which uses its NetApp Element software to manage both NetApp HCI and its SolidFire products.


In many ways, managing hyper-converged infrastructure 2.0 isn't all that different from managing a hyper-converged 1.0 deployment. After all, hyper-convergence vendors have always provided a management layer. One thing that is starting to change, however, is that some vendors have begun integrating machine learning capabilities into their management tools.

For example, HPE's InfoSight for HPE SimpliVity is now AI-enabled, which allows the management layer to examine trends and make consumption-related forecasts. The AI engine can also detect conditions that may signal an impending problem and take corrective action when appropriate.
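
HPE hasn't published the models behind InfoSight, but the basic idea of consumption forecasting can be illustrated with a simple linear trend fit. The Python sketch below, using invented capacity telemetry, projects when a datastore would run out of space -- a deliberately minimal stand-in for what a production analytics engine does.

```python
import numpy as np

# Invented telemetry: 30 daily capacity-used samples (TB), trending
# upward with some noise -- a stand-in for real platform metrics.
days = np.arange(30)
used_tb = 40 + 0.35 * days + np.random.default_rng(0).normal(0, 0.5, 30)

CAPACITY_TB = 60.0  # assumed total usable capacity

# Fit a linear trend: used = slope * day + intercept.
slope, intercept = np.polyfit(days, used_tb, deg=1)

if slope > 0:
    # Project the day on which usage crosses total capacity.
    full_day = (CAPACITY_TB - intercept) / slope
    print(f"Growing ~{slope:.2f} TB/day; capacity exhausted "
          f"around day {full_day:.0f}")
else:
    print("No growth trend detected; no capacity forecast needed")
```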

Like its predecessor, hyper-converged infrastructure 2.0 lets organizations take a modular approach to deploying infrastructure. Disaggregation has made hyper-convergence more flexible than it has ever been, allowing organizations to scale storage and compute independently of one another and driving down total cost of ownership.
