Pros and cons of composable architecture vs. traditional storage
If traditional storage isn't meeting your organization's needs, composable infrastructure may be the answer. Get to know the pros and cons of both approaches.
Traditional storage architectures such as NAS or SAN have supported a range of workloads over the years, and they continue to do so. However, the massive growth of increasingly diverse data, coupled with modern application technologies and workflow complexities, has stretched these architectures to their limit.
To address today's storage needs, many IT teams are deploying storage based on composable architecture, abstracting storage and other physical resources and delivering them as services. But before organizations go this route, they should understand the pros and cons of traditional storage vs. composable infrastructure.
Traditional NAS systems
Storage in most data centers is typically implemented as NAS, SAN or DAS. Each has its own advantages and disadvantages and should be evaluated individually when comparing traditional storage to composable infrastructure. That said, these three architectures share characteristics that are useful to understand when considering a move to composability.
In a NAS configuration, multiple users and applications access data from a shared storage pool on the LAN. NAS is easy to deploy, maintain and access, and, like SAN, it comes with built-in security, fault tolerance and management capabilities -- yet it is generally cheaper than SAN.
One challenge with NAS is that storage requests must compete with other network traffic, which can lead to contention. The alternative is to implement NAS on its own private network, but this approach adds maintenance overhead and cost. And even on a private network, too many concurrent users can overwhelm the storage drives. In addition, entry-level NAS systems have limited scalability. Although high-end NAS is more scalable, even it is limited when compared to SAN.
IT teams looking for more extensive storage often turn to SAN, a dedicated, high-performance network that interconnects multiple storage systems and presents them as a pool of storage resources. The storage devices share the network with application servers that run storage management software and control data access. A SAN offers high availability and scalability, along with failover protection and disaster recovery.
Despite their extensive use, SANs aren't without challenges. They can be difficult to deploy and maintain, often requiring specialized skills. Although these factors alone are enough to drive up costs, SAN components can get pricey, too. In addition, SANs can fall short of performance expectations, in part because of their complexity, although SSDs have gone a long way toward improving SAN performance.
The direct-attached option
Both NAS and SAN rely on network connectivity, which can affect performance even under the best circumstances. For this reason, some organizations use DAS for more demanding workloads. DAS is easier to implement and maintain than either NAS or SAN, and it includes a minimal number of components, all factors that make DAS cheaper. DAS might lack advanced management capabilities, but applications such as Hadoop and Kafka manage storage themselves, so management isn't always an issue.
The bigger concern with DAS is that it can't be pooled and shared like NAS and SAN. It also has limited scalability. The result is an inflexible, heavily siloed storage environment, often leading to overprovisioned and underutilized resources. But this rigidity isn't unique to DAS.
With all three architectures, their inherent structures are fixed and difficult to change, each existing in its own silo. It's no small task to modify configurations or repurpose equipment to meet the fluctuating workload requirements of modern applications. For that, you need storage resources that are fluid and flexible and can support automation and resource orchestration, something traditional storage can't do on its own.
Storage and the composable infrastructure
A composable infrastructure abstracts storage and other physical resources and delivers them as services that can be dynamically composed and recomposed as application requirements change. The composable infrastructure supports applications running on bare metal, in VMs and in containers. Third-party tools can interface with the infrastructure's API to dynamically allocate the pooled resources to meet specific application requirements, making it possible to support a high degree of automation and orchestration.
In a composable infrastructure, storage resources remain separate from other resources and can be scaled independently of them. Storage is allocated on demand and then freed up when no longer needed, making it available for other applications. Composable software handles these operations behind the scenes, without requiring administrators to reconfigure the hardware. In addition, a composable infrastructure can incorporate DAS, NAS or SAN systems into its environment, as part of its pool of flexible storage capacity.
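The allocate-on-demand, free-when-done lifecycle described above can be sketched in a few lines. The following is a minimal, in-memory illustration of how composable software might track a shared storage pool; the class and method names are assumptions for the sake of the example, not any vendor's actual API.

```python
class StoragePool:
    """Toy model of a composable storage pool, tracked in GB."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # application name -> allocated GB

    def available_gb(self):
        """Capacity not currently composed into any workload."""
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, app, size_gb):
        """Compose capacity out of the shared pool for an application."""
        if size_gb > self.available_gb():
            raise RuntimeError(f"pool exhausted: {self.available_gb()} GB free")
        self.allocations[app] = self.allocations.get(app, 0) + size_gb

    def release(self, app):
        """Decompose: return an application's capacity to the pool."""
        return self.allocations.pop(app, 0)


pool = StoragePool(capacity_gb=1000)
pool.allocate("analytics", 400)   # composed for one workload
pool.allocate("ci-runner", 100)   # and another
print(pool.available_gb())        # 500 GB still free
pool.release("analytics")         # workload done; capacity freed for reuse
print(pool.available_gb())        # back up to 900 GB, no hardware change
```

In a real composable system, this bookkeeping happens behind an API rather than in application code, and the "pool" spans physical devices, but the lifecycle is the same: capacity is drawn on demand and returned for other applications without anyone reconfiguring hardware.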
Because a composable infrastructure isn't preconfigured for specific workloads, it can support various applications without needing to know configuration requirements in advance. This approach results in greater flexibility and better resource utilization than traditional storage. Composable architecture also simplifies operations, speeds up deployments, minimizes administrative overhead and promises almost unlimited scalability. Storage resources can be allocated when they're needed and for as long as they're needed.
As good as all this sounds, the composable infrastructure also has challenges. It's a young technology, and the software that drives composability is still maturing. There's also a lack of industry standards or even a common definition of what composability means. Vendors define and implement composable infrastructure systems according to their own rules, which can result in vendor lock-in and potentially cause integration issues.
A composable infrastructure is complex and can be difficult to deploy and manage, often requiring additional expertise. For many organizations, the disaggregation of a composable architecture will require a shift in thinking that realigns business to the new methodology. In the traditional data center, applications are developed, tested and deployed as discrete operations, with resources assembled in a piecemeal fashion. In the modern data center, the application lifecycle is a unified effort that incorporates continuous integration and delivery, along with automated resource allocation, making it well suited to the composable architecture. Without this shift, IT teams risk creating another storage silo.
Despite these challenges, the composable infrastructure can still benefit a variety of workloads. For example, AI and machine learning often require dynamic resource allocation to accommodate processing operations and the fluctuating influx of data. DevOps processes, such as continuous integration and delivery, can also benefit from the composable infrastructure, especially when used in tandem with infrastructure as code. In the same sense, IT teams that want to automate more of their operations could also benefit from composability.
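The pairing with infrastructure as code mentioned above amounts to a reconcile loop: teams declare what resources a job needs, and the composability layer composes or decomposes to match. Here is a hedged, self-contained sketch of that idea; the job names, specs and `reconcile` function are hypothetical, standing in for what a real IaC tool would do through a vendor's API.

```python
def reconcile(desired, composed):
    """Compose resources for newly declared jobs and decompose
    resources for jobs no longer declared. Returns the actions taken."""
    actions = []
    for job, spec in desired.items():
        if job not in composed:          # declared but not yet composed
            composed[job] = spec
            actions.append(("compose", job))
    for job in list(composed):
        if job not in desired:           # no longer declared; tear down
            del composed[job]
            actions.append(("decompose", job))
    return actions


# Declarative spec: what each pipeline job needs (hypothetical values).
desired = {
    "build-job": {"storage_gb": 50, "vcpus": 8},
    "ml-train": {"storage_gb": 500, "vcpus": 32},
}
composed = {}
print(reconcile(desired, composed))  # composes both jobs

desired.pop("build-job")             # job finished; drop its declaration
print(reconcile(desired, composed))  # decomposes build-job only
```

The point of the sketch is the workflow, not the code: declarations change as pipelines run, and the infrastructure converges to match, which is why CI/CD and automation-minded teams are a natural fit for composability.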
In fact, any organization running applications that have unpredictable or continuously changing storage requirements should consider the composable infrastructure. That's not to say there's no place for traditional storage. Organizations that run workloads that have fairly stable requirements and don't require continuous reconfigurations or resource reallocations might be fine with the traditional approach.
Moving to the composable infrastructure
Traditional storage systems weren't designed for today's applications. As those applications become more dynamic and data sets grow larger and more diverse, IT teams will be scrambling to accommodate them.
Composable architecture could prove an effective approach to handling the complexities of today's storage management. However, the technology is young and has a long way to go before it can deliver on its promise of true composability across all commodity hardware. Even so, more vendors than ever are adopting the composable model. Perhaps the biggest question IT teams should ask when considering a composable architecture is whether they're ready to move to a new way of thinking about infrastructure and allocating storage resources.