Storage deployed inside containers can help enterprises of all types and sizes bring efficiency and cost savings to a critical IT operation.
By containerizing storage services under a single management plane, such as the Kubernetes open source container orchestration system, administrators can save time and concentrate on more important tasks. Containerized storage also enables organizations to run their applications and storage platform on the same server infrastructure, reducing infrastructure costs.
Containerization takes enterprise storage to the next level by providing integration with Kubernetes and being able to present persistent storage to a container framework on a target private cloud platform, said Doug O'Flaherty, marketing director of IBM's storage unit. "What differentiates superior storage for containers is how well this integration is done."
Multiple containerized storage choices
Organizations getting started with data storage containerization can choose from several, frequently confusing approaches. NFS is the easiest choice. It supports both OpenStack and bare metal-provisioned Kubernetes deployments, and it provides the greatest flexibility and dynamic provisioning abilities in Kubernetes, said Cameron Seader, technology strategist at open source software provider SUSE. "This is the best way to get started with container storage because not all Kubernetes versions are equal in the support of available storage classes," he noted.
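As a minimal sketch of the NFS approach, a Kubernetes PersistentVolume can point directly at an existing NFS export. The server address and export path below are placeholders, not values from any specific deployment:

```yaml
# Hypothetical NFS-backed PersistentVolume; server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # NFS lets many pods mount the same share
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/app-data  # placeholder export path
```

Applications then consume this volume through a PersistentVolumeClaim rather than referencing the NFS server directly, which keeps the storage details out of the application manifest.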
Organizations just getting started with containerized storage are typically trying to find the best way to transition storage to a new infrastructure model using either current or legacy storage systems. Addressing this need, Kubernetes features an abstraction layer, enabling vendors to develop their own plugins for use with their proprietary systems.
"That means, if you have EMC arrays, you can use the EMC plugins; if you have NetApp arrays, you can use NetApp plugins and so on," said Tim Curless, DevOps solutions principal at cloud consultancy Ahead. "These plugins also allow a great degree of separation of duties between developers, DevOps engineers and storage engineers."
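To illustrate that separation of duties, a storage engineer can publish a StorageClass that binds a vendor's provisioner plugin to a named class, which developers then request without knowing anything about the backing array. The provisioner name and parameters here are illustrative stand-ins, not any vendor's actual driver:

```yaml
# Illustrative StorageClass; the provisioner and parameters are placeholders
# standing in for a vendor-supplied (e.g., EMC or NetApp) plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-block
provisioner: csi.example.com   # placeholder for a vendor CSI driver name
parameters:
  tier: performance            # hypothetical vendor-specific option
reclaimPolicy: Delete
allowVolumeExpansion: true
```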
Since containers are designed to be moved around to different environments, the data manager should be involved in their creation to help with persistent data storage. "The data manager needs to know the data requirements for each containerized application and ensure there is a process in place to prevent data from being overwritten," said Sue Clark, CTO architect at Sungard Availability Services. "They also need to know the security and access requirements associated with each container so the correct approach to data storage can be achieved."
Thanks to storage vendors' steady adoption of the Container Storage Interface (CSI), organizations are less likely to end up locked into a nonstandard storage product. The growing availability of CSI-compatible offerings makes vendor lock-in less of a concern, said Kiran Chitturi, CTO architect at Sungard Availability Services.
Key areas to consider
When assessing which container management service is the best fit, organizations must first determine their own maturity, goals, staff expertise and level of anticipated vendor-provided support.
"Before diving into containers, you should start by having a complete understanding of your application, as this can help you choose from a range of container storage solutions," Chitturi said. "Stateful applications are not as straightforward as stateless ones." When considering persistent storage for a containerized application, it's important to also look at systems for managing state, Chitturi added.
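For stateful applications, Kubernetes' StatefulSet is the usual starting point: each replica gets a stable identity and its own persistent volume through volumeClaimTemplates. The sketch below is a minimal, hypothetical example; the image name and sizes are assumptions:

```yaml
# Minimal StatefulSet sketch; image, names and sizes are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # example stateful workload
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PVC is created per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```

Because each replica's claim survives pod rescheduling, the state follows the replica rather than the node it happens to run on.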
One of the key qualities needed when getting started with containerized storage is dynamic provisioning, which can be handled through a DaemonSet that monitors the mapping of volume claims to a pod in an automated fashion. "Having to do this by hand for each volume and pod gets a bit cumbersome," Seader noted.
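With dynamic provisioning in place, a developer only declares a PersistentVolumeClaim against a storage class, and the cluster's provisioner creates and binds the underlying volume automatically. The class name below is a placeholder:

```yaml
# The claim triggers dynamic provisioning; no PersistentVolume is created by hand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: fast-block   # placeholder storage class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```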
A container storage platform should also scale intelligently, balancing resources, usage and optimal performance to accommodate the complex transactions and data-intensive operations that are inherent to AI, deep learning, machine learning and other demanding applications, said Uladzimir Sinkevich, Java enterprise architect at ScienceSoft, a software engineering and IT services company. "The container storage [platform] should also easily integrate with your orchestration solution ... which will simplify DevOps activities," he added.
Containerized storage takeaway
Even though the choices can be confusing, it's not necessary to spend months or years overanalyzing the various choices for containerized storage. "Just about any storage provider in your current data center or cloud will provide some level of capability in a Kubernetes or containers platform," Curless said. "Use those existing systems to get started quickly."
Migrating container-based applications, even stateful ones, to a different storage platform after deployment is far easier than a traditional storage migration. "Don't be afraid to jump in and experiment before moving to production," Curless said. "This pattern of iteration will provide additional experience and, ultimately, the best solution for you."