Google's cloud courtship of large enterprises continues, with a pricing option for Google Compute Engine that ties into emergent demand for workloads such as AI, machine learning and graphics processing.
Google has added GPUs, Cloud Tensor Processing Unit (TPU) Pods and local solid-state drives (SSDs) to its committed use discount plan, which gives customers discounts on Google Cloud resources, compared to on-demand pricing, in exchange for a one- or three-year commitment. Prices for the GPUs, TPU Pods and local SSDs -- which are geared toward AI and machine learning workloads -- carry a 55% discount, compared with up to a 70% discount on other resource types through the committed use program, according to a blog post.
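As a rough sketch of how those discount tiers compare, the arithmetic below applies each percentage to a hypothetical on-demand hourly rate; the dollar figures are illustrative assumptions, not Google's published prices, which vary by resource type, region and commitment term.

```python
# Sketch of committed use discount arithmetic. The on-demand rates below
# are hypothetical placeholders, NOT actual Google Cloud pricing.

def committed_rate(on_demand_hourly: float, discount: float) -> float:
    """Effective hourly rate after applying a committed use discount."""
    return on_demand_hourly * (1 - discount)

# Hypothetical GPU billed at $2.48/hour on demand, at the 55% tier
# described for accelerators and local SSDs.
gpu_on_demand = 2.48
gpu_committed = committed_rate(gpu_on_demand, 0.55)
print(f"GPU: ${gpu_on_demand:.2f}/hr on demand -> ${gpu_committed:.3f}/hr committed")

# Hypothetical VM resource at the top 70% tier, for comparison.
vm_on_demand = 1.00
vm_committed = committed_rate(vm_on_demand, 0.70)
print(f"VM:  ${vm_on_demand:.2f}/hr on demand -> ${vm_committed:.3f}/hr committed")
```

Scaled over a one- or three-year commitment, that per-hour gap is the "predictable cost" trade-off the rest of this article describes: a lower, known rate in exchange for paying whether or not the capacity is used.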
All GPU and TPU flavors in every Compute Engine region are eligible for the committed use discounts, which are similar in makeup to the likes of AWS Reserved Instances. However, pricing varies by resource type and region, and customers should weigh those differences closely as they make purchase decisions.
Meanwhile, Google has also introduced support for capacity reservations, which let customers reserve cloud resources inside a particular zone for later use. These are aimed at scenarios in which customers know workloads will rise dramatically at certain times of the year, such as holidays, Google said. Capacity reservations are billed like regular VMs, which means existing discounts customers have -- including for committed use -- are applied.
This is a significant year for Google Cloud's ambition to win more large enterprise deals and keep up with AWS and Microsoft in market share. Among Google's recent efforts to differentiate itself is Anthos, a Kubernetes-based software platform geared toward multi-cloud and hybrid deployments that may not even involve Google's own compute infrastructure, but rather other public clouds or on-premises systems.
Still, Google Cloud is happy to sell customers plenty of compute cycles and wants to do so with transparency, as cloud pricing becomes increasingly complicated, said Paul Nash, director of product for Google Compute Engine.
"It's one of the biggest things customers spend time thinking about," he said. "You shouldn't have to be a finance expert in order to use the cloud and know you're getting what you need."
The expansion of committed use discounts to AI- and machine-learning-centric resources is now in beta, with general availability set for later this year.
Google's move reflects an evolution in enterprises' cloud consumption, particularly in a market where multi-cloud and hybrid present fresh approaches to projects, but also stand to add complexity.
Customers want to ensure they have enough capacity to guarantee performance of their most vital applications, in a manner that both meets their budget and provides the lowest cost possible, said Owen Rogers, director of the digital economics unit at 451 Research. For example, on-demand pricing sounds great in theory, because users can experiment for just a few cents an hour. "But most enterprises running production workloads still like to know where they stand," he said.
As a result, it's fairly standard for cloud providers to offer predictable costs for virtual machines in conjunction with on-demand pricing for times when enterprises need to top up their capacity, Rogers said.
Google's announcement is interesting, because it extends the predictable model beyond the typical virtual machines into the realms of machine learning and high-performance applications, he said. Google's customers can now ensure they have guaranteed capacity, at a guaranteed and predictable price point, for applications such as graphics rendering, machine learning and analytics.
"Considering much of Google's messaging is focused on big data, this is a good addition to its overall proposition," Rogers said.