The rising popularity of hyper-converged infrastructure testifies to its many benefits. Yet even HCI is not without its downsides. Many in the industry are now looking to a new generation of IT infrastructures to address these challenges. One of these is composable infrastructure, which offers a platform that's more flexible and can better utilize resources than hyper-convergence, while streamlining operations and potentially reducing costs.
Like HCI, composable infrastructure also comes with challenges, and IT teams ready to upgrade their data centers should understand how composability works and how it compares to converged and hyper-converged systems before making any decisions.
Composable infrastructure provides a software-defined framework for delivering compute, storage and network resources as a set of unified services. It disaggregates hardware components, groups them into logical resource pools and offers them as on-demand services, much like a cloud platform.
People in the industry sometimes refer to composable infrastructure as infrastructure as code (IaC) or software-defined infrastructure (SDI), but this tells only part of the story. It would be more accurate to call composability an expansion of IaC and a type of SDI, as it incorporates the principles of both to provide a fluid pool of resources that can support specific types of workloads.
A composable infrastructure abstracts the physical server, storage and network components and presents them as flexible services that can be composed and recomposed as needed. It is flexible enough to accommodate applications running on bare metal, in containers or in virtual machines. Intelligent software abstracts the components, manages the infrastructure, dynamically allocates resources and provides the orchestration and automation necessary to deliver the services as efficiently as possible.
The composable evolution
Traditional data center architectures typically employ multi-tiered strategies to host and deliver applications. Virtualization helped to better utilize resources and streamline management, but the multi-tiered approach continued to prevail.
This strategy worked as long as application delivery methodologies remained constant, but newer technologies brought greater complexity, faster deployments and more dynamic workflows. As a result, IT teams began looking for ways to simplify operations and make better use of resources.
Infrastructure as code
Infrastructure as code is a software-based application delivery methodology for automatically provisioning and configuring the infrastructure resources needed to host and run an application. IaC replaces the need for manual intervention when implementing an application, resulting in faster deployments and reduced administrative overhead.
With IaC, developers or managers include infrastructure definition files -- similar to the programming scripts used in data centers to automate IT processes -- along with an application to provide instructions for how to set up resources. The files can include instructions for setting up virtual servers, cloud instances, storage resources, database systems, application packages, testing tools or any number of other components to support application delivery.
The IaC instructions can be imperative or declarative. With the imperative approach, the instructions include specific commands that define how to provision and configure system components. In contrast, declarative IaC defines the desired end state, but does not include the steps necessary to reach that state. In either case, the goal is to ensure you get the same environment whenever the application is deployed.
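The difference between the two approaches can be sketched in a few lines of Python. This is a minimal illustration, not tied to any real IaC tool: the step names, the desired-state dictionary and the `reconcile` function are all invented for the example.

```python
# Minimal sketch contrasting imperative and declarative IaC styles.
# All names here are illustrative, not tied to any real tool.

# Imperative: spell out each provisioning step, in order.
def provision_imperative():
    steps = []
    steps.append("create_vm size=medium")
    steps.append("attach_disk size_gb=100")
    steps.append("open_port 443")
    return steps

# Declarative: describe only the desired end state; a reconciler
# works out which steps are needed to reach it.
DESIRED_STATE = {"vm_size": "medium", "disk_gb": 100, "open_ports": [443]}

def reconcile(current, desired):
    """Return the actions needed to move `current` to `desired`."""
    actions = []
    if current.get("vm_size") != desired["vm_size"]:
        actions.append(f"create_vm size={desired['vm_size']}")
    if current.get("disk_gb", 0) < desired["disk_gb"]:
        actions.append(f"attach_disk size_gb={desired['disk_gb']}")
    for port in desired["open_ports"]:
        if port not in current.get("open_ports", []):
            actions.append(f"open_port {port}")
    return actions

# Starting from an empty environment, both styles produce the same
# steps; rerunning reconcile() against the desired state is a no-op,
# which is what guarantees a consistent environment on every deploy.
print(reconcile({}, DESIRED_STATE))
print(reconcile(DESIRED_STATE, DESIRED_STATE))  # → []
```

The no-op on the second call is the key property of the declarative model: applying the same definition repeatedly always converges on the same environment.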
There are a number of tools that enable IaC delivery, such as Ansible, Chef, Puppet and SaltStack. Many of these tools support both imperative and declarative directives. Organizations can also check the IaC definition files into a source control system alongside the application code. In this way, they have the same controls over their infrastructure definitions as they do their application code. This enables them to revert to previous versions, identify critical changes, compare definitions and carry out other operations.
The IaC approach is well suited to DevOps application delivery because it enables developers and testers to set up the environments they need as they need them, without having to wait for systems managers to provide them with the necessary infrastructure. This also ensures that DevOps teams always work in consistent environments, and have the ability to test those environments alongside the application code.
Converged infrastructure (CI) was one such solution, offering a preconfigured package of hardware and software that was quicker to implement and easier to maintain than traditional infrastructures. In a CI appliance, the compute, storage and networking components were physically integrated, with the hardware and software highly optimized for specific workloads, making CI an ideal fit under the right circumstances.
Unfortunately, CI shared many of the same limitations of traditional data center architectures, such as overprovisioning and lack of flexibility in the face of newer application technologies.
The next evolutionary stage in convergence came in the form of hyper-converged infrastructure, which moved from a hardware-centric architecture to a software-defined approach, starting with compute and storage resources and eventually adding software-defined networking. HCI offered greater levels of abstraction and automation, with components tightly integrated into a platform that could be easily scaled.
This proved effective for specific workloads, such as virtual desktop infrastructure, but like earlier architectures, HCI still suffered from overprovisioning and a lack of flexibility. This is what gave rise to composable infrastructure.
Composable platforms can better accommodate changing workloads than HCI, and they can run applications on bare metal, eliminating the hypervisor dependency that can affect application performance. At the same time, composable infrastructure can support container- and VM-based workloads, resulting in more agility than other converged offerings. In addition, its disaggregated architecture translates to better resource utilization and more streamlined operations, while offering greater and more flexible scalability.
Since it is not preconfigured for specific workloads, composable infrastructure can support various types of applications without knowing their configuration requirements in advance. Resources are presented as services and available to the application on the fly, providing a dynamic infrastructure that can accommodate the demands of today's fluctuating workloads. The intelligent software that drives the infrastructure makes it easier to deploy and manage the resources needed to support these workloads.
In addition, unlike HCI, composable infrastructure uses direct-attached storage (DAS), eliminating the need for a virtualized software-defined storage layer and the latencies that can come with it. Users can also scale storage resources independently of compute and network resources, just as compute and network resources can be scaled independently of each other. Both hard disk drives and solid-state drives are supported, as well as industry standards such as NVMe and PCIe.
Software-defined infrastructure
A software-defined infrastructure (SDI) is a data center environment that uses an independent software layer to manage and abstract physical components and then presents them as logical resources to the hosted applications. An SDI environment uses such technologies as software-defined compute, software-defined storage and software-defined networking to deliver resources as services.
In a true SDI environment, software controls all physical resources and requires little-to-no human intervention to carry out operations. The SDI model aims to achieve a high degree of integration and automation, leading to faster deployments, simplified administration and greater flexibility.
The software layer of an SDI can automatically handle such operations as provisioning, configuration and management, based on current application requirements. It can also carry out operations related to security and disaster recovery, including backing up and archiving data.
An SDI implementation requires intelligent software to carry out these operations in a way that optimizes application performance and maximizes resource utilization. The software must continuously monitor the infrastructure and its workloads to assess conditions and ensure the proper orchestration of resources.
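The monitor-and-orchestrate cycle described above can be sketched as a simple control loop. The metric names, the 20% headroom figure and the `observe`/`plan` functions below are invented for illustration; real SDI software uses far richer telemetry and policies.

```python
# Illustrative control-loop sketch of how SDI software might keep
# allocated resources aligned with workload demand. All thresholds
# and metric names are hypothetical.

def observe(workloads):
    """Aggregate current demand across workloads (stub for telemetry)."""
    return sum(w["cpu_demand"] for w in workloads)

def plan(demand, allocated, headroom=0.2):
    """Decide whether to grow or shrink the allocation.

    Targets the observed demand plus a headroom margin, so the
    infrastructure absorbs small spikes without reconfiguring.
    """
    target = int(demand * (1 + headroom))
    if target > allocated:
        return ("scale_up", target - allocated)
    if target < allocated:
        return ("scale_down", allocated - target)
    return ("steady", 0)

workloads = [{"cpu_demand": 10}, {"cpu_demand": 6}]
action, delta = plan(observe(workloads), allocated=12)
print(action, delta)  # → scale_up 7
```

In a real implementation, this loop would run continuously, and the resulting actions would feed the orchestration layer rather than being printed.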
More recently, there has been a move to incorporate technologies such as AI and machine learning into SDI software to enhance operations even further, an effort sometimes referred to as AI-defined infrastructure.
Several products implement SDI. For example, SUSE Manager provides a centralized tool for managing Linux systems across a variety of hardware and virtual environments, including containers and cloud platforms. Nutanix offers Enterprise Cloud, SDI software that can run on Nutanix NX appliances, preconfigured appliances from other vendors or third-party servers such as the Lenovo ThinkAgile HX Series.
Benefits and challenges
Flexibility is one of composable infrastructure's biggest benefits. Users will find the ability to independently scale components and adapt to changing workloads especially useful for supporting the DevOps pipeline, which relies heavily on continuous integration and delivery.
A composable infrastructure also helps eliminate overprovisioned resources by making better use of hardware while streamlining IT operations. Its intelligent management layer eliminates much of the deployment and optimization overhead that can come with other IT infrastructures, especially when faced with changing workloads. Built-in automation and orchestration help minimize administrative overhead by reducing the need for manual intervention and eliminating many routine tasks. Methodologies that can take advantage of composable infrastructure, such as IaC, can further reduce administrative overhead.
All these factors can help lower infrastructure costs. Because enterprises can allocate resources more quickly and efficiently, application delivery processes become more efficient, leading to further savings. For example, developers and testers on DevOps teams can set up their environments more quickly and easily, resulting in more efficient application delivery and a reduction in overall development costs.
Because of its service-based model, composable infrastructure can be a good fit for private or hybrid clouds. It is also well suited to workloads that require dynamic resource allocation, such as AI or machine learning applications.
Despite these benefits, composable infrastructure is not without challenges. The biggest, perhaps, is that it's still a relatively young technology. Although it has made steady progress in the last couple of years, the software that drives composability is still maturing, especially when it comes to disaggregating and composing compute resources, which is limited by current processor capabilities.
In addition, there is still a lack of industry standards that govern composable infrastructure, leaving vendors to determine how to deploy the infrastructure and even how to define it. This lack of standards limits the technology's potential. In theory, it should be able to support commercial off-the-shelf hardware across multiple locations, but such flexibility is still a long way off, increasing the risk of vendor lock-in.
Composable infrastructure is also a more complex system than HCI, so it's not as simple to deploy or maintain. As a result, it requires greater administrative expertise, which offsets some of the cost savings. Upfront costs can also be steep, and, without industry standards, expanding the system can become pricey if an organization is locked into proprietary equipment.
It's all about the layers
Three primary layers -- physical resources, composing software and the management API -- make up composable infrastructure. The diagram below provides a conceptual overview of how these components work together to deliver composability. The physical layer includes the compute, storage and network resources on which the infrastructure is based.
The composing software abstracts the physical components and organizes them into logical resource pools accessible through the API. The software is programmable, configurable and self-correcting. It can automatically compose the logical resources needed to meet specific application requirements, and it supports the use of templates, which provide predefined configurations for specific use cases. Because the composing software varies between vendors, exact features will also vary, as will the way in which they're implemented.
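How the composing software might draw on resource pools to satisfy a template can be sketched as follows. The pool sizes, template names and `compose`/`decompose` functions are all hypothetical, standing in for the vendor-specific logic described above.

```python
# Illustrative sketch of template-driven composition: the composing
# software claims resources from disaggregated pools to assemble a
# logical system, then returns them when the system is decomposed.
# Pool sizes and template names are invented for the example.

POOLS = {"cpu_cores": 256, "memory_gb": 2048, "storage_tb": 100}

TEMPLATES = {
    "web-tier": {"cpu_cores": 8, "memory_gb": 32, "storage_tb": 1},
    "database": {"cpu_cores": 32, "memory_gb": 256, "storage_tb": 10},
}

def compose(template_name, pools=POOLS):
    """Allocate the resources a template calls for, or fail cleanly."""
    spec = TEMPLATES[template_name]
    if any(pools[r] < need for r, need in spec.items()):
        raise RuntimeError(f"insufficient resources for {template_name}")
    for r, need in spec.items():
        pools[r] -= need          # claim from the shared pool
    return dict(spec)

def decompose(allocation, pools=POOLS):
    """Return a logical system's resources to the shared pools."""
    for r, amount in allocation.items():
        pools[r] += amount

web = compose("web-tier")
print(POOLS["cpu_cores"])   # → 248 (8 cores claimed)
decompose(web)
print(POOLS["cpu_cores"])   # → 256 (pool restored)
```

The point of the sketch is the lifecycle: resources flow out of shared pools on compose and flow back on decompose, which is what lets the same hardware serve different workloads over time.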
The management API plays a vital role by facilitating access to the infrastructure's resources. A single interface carries out a wide range of operations, such as searching, inventorying, provisioning, updating and diagnosing. Developers can use the API to programmatically control the infrastructure, and managers can use the API to manipulate any part of the environment. For example, the API enables IaC-based applications to compose infrastructure on demand or for DevOps teams to use existing application delivery tools to automatically provision resources.
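A rough sense of what programmatic control through such an API looks like is sketched below. Real vendor APIs are REST services with their own endpoints and payloads; the `ComposableClient` class here is an invented in-memory stand-in, so the method names and fields should not be read as any actual product's interface.

```python
# Hypothetical sketch of driving a composable-infrastructure
# management API from code. The class, its methods and the payload
# fields are invented for illustration; real vendor APIs differ.
import json

class ComposableClient:
    """Tiny in-memory stand-in for a REST management API."""

    def __init__(self):
        self._inventory = {}
        self._next_id = 1

    def provision(self, spec):
        """Compose a logical node from the given resource spec."""
        node_id = f"node-{self._next_id}"
        self._next_id += 1
        self._inventory[node_id] = dict(spec, state="running")
        return node_id

    def inventory(self):
        """List every composed node, as an inventory call would."""
        return dict(self._inventory)

    def decommission(self, node_id):
        """Release a node's resources back to the infrastructure."""
        self._inventory.pop(node_id)

client = ComposableClient()
node = client.provision({"cpu": 16, "memory_gb": 64})
print(json.dumps(client.inventory()[node]))
client.decommission(node)
```

An IaC pipeline or DevOps tool would call the same provision/inventory/decommission operations over HTTP, which is how existing application delivery tooling can drive the infrastructure without manual steps.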
Considerations and industry players
IT teams considering composable infrastructures for their data centers need to do their homework. Products often differ significantly between vendors, and the industry is evolving quickly.
Hewlett Packard Enterprise (HPE) has been at the forefront of this movement, launching its efforts in 2015. Since releasing its Synergy-based platform, HPE has been steadily improving and expanding its composable capabilities. For example, the vendor now offers composability on ProLiant DL rack servers, as well as Synergy machines. In addition, HPE has introduced a hybrid cloud offering called Composable Cloud, which makes it possible to build a composable private cloud platform.
HPE is not the only composable player in town. Dell EMC offers its own take on the composable infrastructure with the PowerEdge MX. According to Dell, the platform is the first of its kind to be defined with kinetic infrastructure, a term coined by the company to express the notion of "true composability." Liqid also sells several composable platforms that are built on Intel Optane SSDs. In addition, DriveScale provides a hardware-independent platform for building composable infrastructures, and Intel offers the Rack Scale Design reference architecture for deploying its own form of the composable infrastructure.
What these and other options point to is that there are as many interpretations of what constitutes a composable infrastructure as there are products. As a result, IT decision-makers must evaluate each composable platform individually, compare one against the other, and then determine which best fits their specific needs and budgets.
However, they must remember that the composable approach is still an emerging industry. Not every organization will benefit from the technology, and even if it does, choosing the right offering is no small task. Composability could prove a significant benefit, under the right circumstance, but getting to that point takes both time and effort.