Enterprises are presented with many challenges when they move to the cloud -- not the least of which is wading through the multitude of services to find the ones that will actually benefit their businesses.
Cloud providers churn out so many services these days that the race appears to have shifted from which one can offer the cheapest services to which one can offer the most services. AWS executives boasted about having more than 175 services at their 2019 user conference, while Microsoft and Google list an even higher number of services in their respective product directories.
The variety of cloud services only makes it harder to choose the right ones as part of a cloud evaluation. There are the staples of the public cloud, such as compute, networking, storage and databases; then there's the host of other IT sectors these vendors continue to encroach upon, whether it's AI, security or even esoteric categories like satellite communications. And within each of those groupings, there can be a dozen services to pick from, each with its own unique features.
"The increasing volume of services is definitely a challenge for pretty much all of my clients," said Sean Feeney, cloud practice director at Nerdery, an IT consultancy. "It could take an entire career to keep up with the actions of the cloud providers, much less implement them to add value to your business."
The cloud services deluge and the problems it presents
As part of its Cloud Price Index, 451 Research tracks 2 million product line items across AWS, Microsoft, Google, IBM and Alibaba. The total number of items it tracks doubled in 2019.
"It's the Wild West out there," said Owen Rogers, research vice president at 451. "As soon as an enterprise has made a commitment to a development, a new technology comes along which might be a better fit."
To illustrate the conundrum enterprises face when they try to keep pace with the cloud release cycles, Rogers used an example of a company that moved all its applications to cloud VMs a few years ago, only to see a raft of container-based services come along after the fact. The diversity of options can help add value and build more powerful apps, but it also creates headaches as IT teams figure out the best combination of services.
Another part of the problem is that enterprises aren't fully aware of what's out there now, so they're missing opportunities to save money. For example, an IT team could move its on-premises database to SQL Server hosted on AWS as a cost-saving measure, but it might miss the fact that it could have saved as much as 80% on migration expenses if it had used the AWS Database Migration Service instead, said Jeff Valentine, CTO at CloudCheckr, a cloud management platform provider.
"It's information overload," Valentine said. "No one can keep up with the constant barrage of changes."
Experimentation is critical to a cloud evaluation
Despite the bevy of options, enterprises shouldn't be overwhelmed to the point of paralysis. It's important to see what tools and services fit. Start by sorting out the core public cloud services and features you want to use, Feeney said. This could include VMs, containers or PaaS.
"Whatever your base model is, that's going to be your biggest chunk of spend," he said. "All these other services they roll out with are often just iterative versions or features of an existing service, or they're very vertically aligned services."
Avoid services tailored to an industry you're not a part of, but otherwise encourage your developers to experiment, Feeney noted. And they should do so continuously. In the past, the bulk of the major product rollouts or updates would coincide with a vendor's primary user conference, but that's not the case today. Instead, cloud providers push out updates and services almost weekly.
"Build safe sandboxes to try these new services and find out if it improves workflow, saves cost or benefits [your system] otherwise," Feeney said.
And that experimentation should be a grassroots initiative, led either by developers or lines of business, Feeney said. A top-down approach won't work because company leaders often only catch the biggest announcements and are less likely to see something minor that might be highly valuable to the business.
Analyst firm IDC is actively encouraging its clients to experiment with new services and to have a process in place to ensure it's done regularly. While testing IT infrastructure on premises often involves long, drawn-out RFPs and beta tests, it's much more cost-effective to evaluate public cloud services with a small pilot program, said Deepak Mohan, an IDC analyst.
"When a new database type gets announced, it literally costs you a few dollars to quickly spin it up and try it out," he said.
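The back-of-the-envelope math behind that claim is easy to check. A minimal sketch, where the hourly compute rate and storage rate are illustrative assumptions rather than published vendor prices:

```python
# Rough cost estimate for a short database pilot.
# The rates below are illustrative assumptions, not quoted vendor prices.
def pilot_cost(hourly_rate: float, hours: float, storage_gb: float = 20,
               storage_rate_per_gb_month: float = 0.10) -> float:
    """Estimate the cost of running a small trial instance for a pilot."""
    compute = hourly_rate * hours
    # Pro-rate one month of storage to the pilot's duration (~730 h/month).
    storage = storage_gb * storage_rate_per_gb_month * (hours / 730)
    return round(compute + storage, 2)

# A two-day trial of a small instance at an assumed $0.02/hour:
print(pilot_cost(hourly_rate=0.02, hours=48))  # about a dollar
```

Even generous assumptions leave the total at pocket change, which is why a short pilot carries far less risk than an on-premises proof of concept.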
And the need to experiment is paramount today as organizations move beyond lift-and-shift and transition to different -- and often unfamiliar -- compute paradigms in the cloud. Companies will get the most benefit from the public cloud if they modernize their workloads and move closer to the 12-factor application model, Feeney said. The only way that happens is if IT teams dig into these services and find out how the tools can help them optimize their IT footprint.
"The optimization of cloud a few years ago was about working out how to squeeze costs on infrastructure," Rogers said. "The optimization of today is about helping enterprises constantly take advantage of new services to add value to the business, not just squeezing costs."
Large enterprises and the limits of experimentation
Of course, some companies can only tolerate so much risk. CloudCheckr has large enterprise clients that spend millions of dollars each month on the public cloud across dozens of departments and projects. For those companies, experimentation leads to fear and anxiety, because the wrong move, no matter how well intentioned, could run afoul of data privacy laws or create other issues that result in dire consequences for the business.
"They have hundreds of people in their organizations that could [make a mistake that accidentally] sends their CEO to jail or puts them on the front page of The Wall Street Journal," Valentine said.
Leaders at these companies don't want to give autonomous control at the department level because of a lack of governance and consistency, Valentine said. If different teams experiment across clouds, it could lead to service sprawl and confusion, and limit visibility across workloads.
An emerging trend among these types of companies is what is known as "cloud centers of excellence." There isn't a standardized approach to creating and implementing these centers, but the goal is to create cross-functional, centralized oversight of IT resources in the cloud. Input comes from stakeholders across a business, but it can also involve cloud providers and outside consultants. These cloud centers can provide guidance on new projects, so developers and lines of business comply with governance policies around cost, security and consistency.
However, enterprises shouldn't be draconian, Valentine said. Instead, put the right guardrails in place so employees can have some freedom while still complying with corporate security and compliance standards. That way, IT teams can confidently move fast and make important changes to get the most out of their cloud environments.
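One lightweight form of guardrail is a pre-provisioning check: a request goes through automatically if the service is sanctioned and fits the team's budget, and anything else is routed for review. A minimal sketch, with hypothetical service names and a made-up per-team monthly cap:

```python
# Hypothetical guardrail: approve a provisioning request only if the
# service is on the sanctioned list and fits under the team's budget cap.
# Service names and the cap are illustrative, not any vendor's catalog.
APPROVED_SERVICES = {"vm", "managed-sql", "object-storage"}
MONTHLY_CAP_USD = 500.0

def request_allowed(service: str, est_monthly_cost: float,
                    current_spend: float) -> bool:
    """Return True if the request can proceed without manual review."""
    if service not in APPROVED_SERVICES:
        return False  # unfamiliar service: route to the cloud center for review
    return current_spend + est_monthly_cost <= MONTHLY_CAP_USD

print(request_allowed("managed-sql", 120.0, 300.0))     # True: sanctioned, under cap
print(request_allowed("satellite-comms", 50.0, 0.0))    # False: not yet sanctioned
```

The point of a check like this is that the default answer is "yes" for well-understood services, so teams keep their freedom to experiment while anything novel or expensive gets a second set of eyes.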