Hyper-converged infrastructure was heralded as a way to ease the complexity of platforms that relied on discrete systems for compute, storage and networking by bringing everything together in a single, fully provisioned and managed chassis. Times have changed, however, and although hyper-converged still has a part to play in certain circumstances, the benefits aren't as clear-cut as they once were.
Within the modern data center, the trend is away from physical platforms toward far more virtualized ones, with cloud playing an increasing role. Hyper-converged infrastructure (HCI) aimed to provide such a virtualized platform, but it still suffered from headroom constraints based on the resources immediately available in the chassis. Many modern workloads, such as virtual desktop infrastructure, have variable resource requirements; highly virtualized resource-as-a-service platforms are well suited to support them, whereas physical platforms can bring constraints with them.
Here, the four main disadvantages of hyper-converged infrastructure are described.
1. Vendor lock-in

Hyper-converged platforms generally allow extra resources to be bolted on, but only if that extra capacity comes from the original vendor. Such expansion units might not offer specifically targeted resources and might require you to buy extra CPU and networking capacity when all you need is storage, for example. Using commodity hardware to provide the extra resources generally takes the platform outside of the hyper-converged environment, losing the control and performance optimization that comes from adding more proprietary -- and expensive -- hardware.
An aspect of cloud computing that has proven very attractive to those who use it is how a cloud platform can be built up and expanded using low-cost commodity hardware. A commodity cloud won't perform as well as a dedicated hyper-converged platform for all workloads -- and this is where buyers must differentiate. A workload that's heavily resource-dependent -- e.g., data analytics, where both the data throughput from storage to CPU and the raw performance of the CPU carrying out the analysis matter -- might justify a dedicated hyper-converged platform. However, when it comes to producing and publishing reports against the analysis, a lower-cost cloud environment might provide the more cost-effective approach.
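The placement decision above can be sketched as a crude heuristic. This is a minimal illustration, not a real sizing tool: the function name, thresholds and figures are all hypothetical assumptions.

```python
# Sketch: deciding where a workload runs based on its resource profile.
# Thresholds and figures are illustrative assumptions only.

def suggest_platform(sustained_cpu_util: float, io_gbps: float) -> str:
    """Heavy, steady resource use favors a dedicated hyper-converged
    platform; light or bursty use favors a commodity cloud."""
    if sustained_cpu_util > 0.7 or io_gbps > 5.0:
        return "dedicated HCI"
    return "commodity cloud"

# An analytics-style workload vs. a report-publishing workload.
print(suggest_platform(0.9, 8.0))   # dedicated HCI
print(suggest_platform(0.2, 0.5))   # commodity cloud
```

In practice such a decision would also weigh data gravity, licensing and egress costs, but the shape of the trade-off is the same.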
2. Power requirements
Overall power densities in data centers have increased as IT managers have sought to cram more workloads into the available space within an existing facility. Hyper-converged systems can bring this issue to a head, with kW/m2 power requirements being outside of original design specifications. This is less of a pressing issue for many now, as more workloads can be offloaded onto third-party cloud platforms.
Indeed, some organizations are now finding the opposite problem: the data center is too large for their needs -- with the concomitant impact on cooling costs if an overall space-cooling approach has been taken. In such facilities, implementing highly power-dense systems might not be cost-effective if it requires a redesign and reimplementation of power distribution.
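The density check is simple arithmetic. The sketch below uses entirely hypothetical figures -- node wattage, footprint and design limit are assumptions for illustration only.

```python
# Sketch: checking whether a proposed HCI deployment exceeds a facility's
# designed power density. All figures are hypothetical examples.

def power_density_kw_per_m2(total_load_kw: float, floor_area_m2: float) -> float:
    """Average power density across the occupied floor area."""
    return total_load_kw / floor_area_m2

# Hypothetical facility designed for 1.5 kW/m2.
DESIGN_LIMIT_KW_PER_M2 = 1.5

# Ten hyper-converged nodes at 1.2 kW each in a 6 m2 footprint.
density = power_density_kw_per_m2(total_load_kw=10 * 1.2, floor_area_m2=6.0)

print(f"{density:.1f} kW/m2")            # 2.0 kW/m2
print(density > DESIGN_LIMIT_KW_PER_M2)  # True: exceeds the design spec
```

Exceeding the design figure doesn't just mean more power feeds; cooling capacity per rack usually has to rise in step.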
3. Cloud compatibility
Most hyper-converged platforms are capable of supporting virtualization out of the box and will come with their own management software to enable the easy management of workloads on the platform. These capabilities shouldn't, however, be confused with a true cloud environment.
Where hyper-converged systems can come undone is in applying available resources across multiple physical systems, rather than within just one. Cross-system resource sharing might be possible when all the hyper-converged systems are from the same manufacturer and of the same generation. However, if different manufacturers and equipment generations are in use, it's far more likely that the systems won't be able to share resources in a fully elastic manner.
4. High availability

Many hyper-converged platforms were touted as having multiple redundant facilities built in, providing high availability. However, many base-level systems don't come with full redundancy, with such capabilities being provided at extra cost. Therefore, all workloads running on such a platform are at risk should the worst happen and a critical component -- for example, a power supply, network card or storage controller -- fail. Again, with a full cloud platform, this should be less of a problem: live mirrored instances of a workload can be operated on geographically distant parts of the overall platform, with data being live synchronized across two or more storage systems.
Other areas that can cause problems include how a hyper-converged system is licensed: Is it per system, per workload or per the amount of resources used?
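How much the licensing model matters depends on the shape of the estate, which a quick comparison can show. The models, prices and estate figures below are illustrative assumptions, not vendor rates.

```python
# Sketch: comparing hypothetical hyper-converged licensing models.
# All prices and estate figures are illustrative assumptions.

def per_system_cost(systems: int, price_per_system: float) -> float:
    return systems * price_per_system

def per_workload_cost(workloads: int, price_per_workload: float) -> float:
    return workloads * price_per_workload

def per_resource_cost(vcpu_hours: float, price_per_vcpu_hour: float) -> float:
    return vcpu_hours * price_per_vcpu_hour

# A hypothetical estate: 4 systems, 30 workloads, 50,000 vCPU-hours/year.
costs = {
    "per system": per_system_cost(4, 25_000),
    "per workload": per_workload_cost(30, 2_000),
    "per resource": per_resource_cost(50_000, 1.50),
}

cheapest = min(costs, key=costs.get)
print(cheapest, costs[cheapest])  # per workload 60000
```

With different workload counts or utilization patterns, a different model wins -- which is exactly why the licensing basis deserves scrutiny before purchase.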
As with most maturing technology, it isn't that hyper-converged has died. Indeed, it still has a strong part to play within many organizations' overall environments as a dedicated platform for certain workloads. However, the system must operate as a peer partner within the rest of the environment -- having a proprietary platform with its own orchestration, management and alerting capabilities is just asking for trouble.
To recap, the main disadvantages of HCI include:
- Vendor lock-in
- Power requirements
- Cloud compatibility
- High availability