The converged vs. hyper-converged infrastructure debate is only a few years old, but its history goes back to the 1980s. That's when IT vendors started ensuring their products could interoperate with products from other vendors, testing them in places such as the University of New Hampshire's InterOperability Laboratory. From that stage, it was an easy step for resellers to start packaging integrated components they knew would work well together in a single IT system made up of compute servers, storage systems and networking devices.
That integration evolved into converged infrastructure, in which the vendors themselves began selling IT components as reference architectures or packaged bundles. From that point, it was only a matter of time before hyper-converged infrastructure (HCI) -- a completely integrated set of components that combines virtualization, compute and storage resources in one chassis -- was born. Inevitably, that gave birth to the converged vs. hyper-converged infrastructure debate among IT administrators.
Organizations continue to weigh the pros and cons of converged vs. hyper-converged infrastructure to determine which IT architecture will best meet their needs. The following quiz will help IT administrators test their knowledge of the differences between the two approaches.