Containers have evolved from experimental projects into critical infrastructure components within IT ecosystems. Their rapid scalability and agility have made them a desirable technology for years, but only recently have companies found their footing and started to run with it.
For all their benefits, however, containers introduce challenges of their own: increased complexity, state management, skyrocketing costs and -- chief among them -- a major skills shortage. Organizations can choose from dozens of specialized container management tools to address some of these challenges, but finding the right one can be difficult.
The rise and evolution of containers
In the past few years, containers have risen to prominence and now define the ideal IT environment: fast, highly scalable, agile and theoretically less resource intensive.
According to a "Voice of the Enterprise" study conducted by analyst firm 451 Research, more than 58% of respondents within DevOps organizations said they are actively adopting containers. Another 31% are in either the proof-of-concept stage or planning trials, said Jean Atelsek, research analyst at Boston-based 451 Research.
Meanwhile, Enterprise Strategy Group (ESG), another analyst firm based in Boston, asked organizations where their production applications and workloads run, both on and off premises. ESG's report, entitled "Application Infrastructure Modernization Trends Across Distributed Cloud Environments," showed that VMs still host around 25% of workloads in both locations in 2022. Containers, however, have an adoption rate of 17% on premises and 21% in cloud services, and are expected to overtake all other deployment types by 2024, at 26% and 27%, respectively.
Organizations choose to build new applications, tools and services on microservices and container infrastructure because of the portability, agility and scalability benefits, as well as the potential for rapid-fire deployment patterns. Previously, IT teams had to plan their monolithic code deployments carefully, months in advance; containers and microservices now enable them to deploy small updates on a rolling basis instead -- without disrupting user access or experience.
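To illustrate the rolling-update pattern described above: in Kubernetes, the most widely used container orchestrator, a Deployment object can declare that updates roll out gradually rather than all at once. This is a minimal sketch, not an implementation from the article -- the service name, image, replica count and rollout limits here are all hypothetical.

```yaml
# Hypothetical Deployment for a service named "web"; names and values
# are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica is taken down at a time
      maxSurge: 1         # at most one extra replica runs during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.4.2
```

With this strategy, pushing a new image tag (for example, via `kubectl set image deployment/web web=registry.example.com/web:1.4.3`) replaces pods one at a time, so the service stays available throughout the rollout -- the behavior that lets teams ship small updates continuously instead of scheduling disruptive big-bang deployments.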
These modern application deployment models also enable IT teams to more easily inject AI and machine learning algorithms into their software. This significantly reduces the grunt work in IT operations -- known more formally as toil -- which frees up hands and minds from repetitive, basic tasks for more interesting or productive projects.
"People don't really want to manage infrastructure anymore. They just want to build their applications and have them run," Atelsek said.
Several vendors have stepped up to that change. Hyperscale cloud providers, for example, now offer services such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE), which reduce the infrastructure management overhead associated with container and Kubernetes deployments.
"The hyperscalers are packaging ... engineering solutions, so that it's just much more of a turnkey operation," Atelsek said.
The container migration and management challenge
Many well-established companies have built up large, stable application environments with major install bases and several million lines of legacy code. Now those companies find it increasingly difficult to compete with smaller, born-in-the-cloud companies built on flexibility and speed.
"The economics of the new model is so compelling that the decision to not move forward with containers and microservices is akin to the decision to go out of business," said Janae Stow Lee, consultant at Evaluator Group, an IT analyst firm based in Boulder, Colo.
But it's not an easy transition to make. Legacy monolithic applications -- those with millions upon millions of lines of code, in particular -- are difficult to break into microservices and containers, both technologically and in terms of staffing. Even after a migration, it can be difficult to find staff with the necessary skills to manage a container deployment -- and equally difficult to build those skills in house. A cloud platform can relieve some of that pressure, but it won't eliminate it. Poor planning invariably results in disaster.
"The danger of replicating your on-premises mess in the cloud is very real," Atelsek said.
Not only is container management a total technological shift, but it's a drastic cultural change as well. Implementing both organizational and technological shifts concurrently is among the tallest -- and most important -- hurdles to clear. Containers, microservices and cloud architectures can only benefit a company if it implements and manages them efficiently. And while there's a growing number of container management platforms on the market, choosing the right one for your needs isn't always easy.
For example, while managed container and Kubernetes services, such as EKS, AKS and GKE, can simplify deployments, there are potential tradeoffs.
"Buying an all-in service makes [container management] easier because more of that integration and abstraction work is provided," Stow Lee said. "The downside of doing that, of course, is you turn the keys of your IT kingdom over to someone who may not always have your best interests at heart." And as time passes, that cloud provider's priorities, hardware requirements, or security and governance policies might not align with a business's needs.
Determine whether the simplicity of an all-in-one managed service is worth the loss of access to, and control over, the back end. Weigh that option against the complexity of managing the environment yourself to retain that access.
These tough decisions and tradeoffs, coupled with the rise in container adoption in general, prompted Evaluator Group to expand its focus beyond general IT management consultancy and into the container management space. Specifically, the analyst firm looks to guide clients toward the services and platforms that would best suit their requirements.
Drawing on a series of interviews with company executives to determine their primary concerns and starting positions, Evaluator Group built an interactive tool, 2022 EvaluScale Insights for Container Management Systems, that enables clients to evaluate container management offerings based on the features they need most. The first iteration of the EvaluScale Insights tool includes comparisons for container management services and container management platforms.
"Over the past couple of years, there have been many requests from [our] subscribing enterprise customers asking us to provide more guidance, so the partners decided to actually invest more time and energy in the space," Stow Lee said.
Addressing the skills shortage
The IT industry has suffered from a widening skills gap for several years -- a gap that continues to grow as IT systems become more complex. Neither organizations nor analysts can predict when it will close, but the future of containers hinges on the issue.
"Container adoption is growing across the board ... but the skills for configuring and orchestrating containers are in short supply," Atelsek said. Furthermore, those organizations with massive monolithic architectures will find it difficult to attract skilled IT staff, compared with cloud-native organizations that "move fast and break things."
As infrastructure, platforms and applications grow in complexity, many traditional entry-level IT positions have been automated away in the name of reduced manual labor and guaranteed uniformity. Thus, many new graduates face a job market in which, from their perspective, the starting line keeps moving away -- the minimum experience requirements seem impossible to meet in academic environments that are behind the industry.
Not only are organizations suffering from a lack of available skills, but aspiring IT professionals don't necessarily have access to the types of technologies for which IT organizations are hiring. Entry-level job positions must be fully revamped for a modern IT industry.
"When skills are scarce and processes need to be changed, distributing the decisions around that [issue] makes the problem worse, because instead of building a set of skills, you're trying to build multiple sets of skills in multiple places," Evaluator Group's Stow Lee said. "You're not able to be efficient about how you leverage the expertise."
That said, not all is lost when it comes to the future of the container ecosystem skills gap. Several vendors, such as Red Hat, have been working toward creating easier-to-use interfaces that don't require a high level of Kubernetes expertise. These interfaces enable users to navigate and interact with their environments via a series of point-and-click sequences, rather than with the web of APIs behind the scenes.
"[Enterprises are] bringing in people who don't have the depth [of knowledge] in Kubernetes," said Rob Strechay, senior analyst at ESG. "As more people have to support Kubernetes, and containers [prompt] the need for good customer experience, the usability has to significantly go up."
The advent of ClickOps
ClickOps is an unofficial term that encapsulates the trend of self-service portal development and dedicated steps toward improved usability in containerization technologies. In particular, the advent of a centralized dashboard enables IT admins to access, manipulate and control data and insights from one location, rather than cycle through a series of dashboards to accomplish tasks that need multiple data sources.
ClickOps isn't necessarily a new play on DevOps -- instead, it's a drive toward comprehensive self-service that reduces complexity and IT admins' interaction with background APIs. It emphasizes a simple and clear front end that leaves API management behind the scenes with developers. ClickOps, at its heart, is an extension of the shift-left endeavor championed by DevOps.
The advent of point-and-click operations, and the simplification of user interfaces and experiences, reduces complexity for IT professionals outside of the dev department. The reduction in mandatory tool experience should help close the skills gap, but does not negate the need to understand what you're clicking on and what it will do.
In addition to improved, simpler UIs, cloud vendors are creating standard configurations to reduce the setup work. Hyperscale cloud providers, in particular, are responding to the skills gap by packaging container runtimes into platforms that have preset defaults. "That takes the onus off of the user for the fiddly configuration settings that are necessary to orchestrate a container-based application or a container-heavy environment," Atelsek said.
Cost is a common objection
Containers were initially touted as a path to major cost savings, but time has proven that isn't necessarily the case.
Open source technologies are prevalent in the containerization market, but most are safer to use when accessed through a service provider that can offer support as well as trusted integrations with other tools. Those add-ons, specializations and support services cost money.
"The notion of 'swipe a credit card and get compute and storage capacity' was great five years ago," Atelsek said. "When your expenses [get] into the millions of dollars per month for operating that infrastructure, the CFO and the CIO -- and also ... security -- aren't going to let that happen anymore."
Runaway costs have been a common, and growing, theme in the container market over the past two years especially, fueled by the global, economy-shaking pandemic that forced companies to accommodate a wholly new work model. Many cloud-based models cracked under the pressure, and companies saw their costs of operation skyrocket. In 2022 -- and especially moving forward -- those numbers are far from sustainable.
Enterprise Strategy Group (ESG) is a division of TechTarget.