Google announced a series of generally available feature updates today for its Cloud Bigtable database service, designed to help improve scalability and performance.
Google Cloud Bigtable is a managed NoSQL database service that can handle both analytics and operational workloads. Among the new updates that Google is bringing to Bigtable is increased storage capacity, with up to 5 TB of storage now available per node, an increase from the prior limit of 2.5 TB. Google is also providing improved autoscaling capabilities, so that a database cluster automatically grows or shrinks based on demand. Rounding out the Bigtable update is enhanced visibility into database workloads, intended to enable better troubleshooting of issues.
"The new capabilities announced in Bigtable demonstrate ongoing focus in increased automation and augmentation that is becoming table stakes for modern cloud services," said Adam Ronthal, an analyst at Gartner. "They also further the goal of improved price and performance -- which is rapidly becoming the key metric to evaluate and manage any cloud service -- and observability, which serves as the basis for improved financial governance and optimization."
How autoscaling changes Google Cloud Bigtable database operations
A promise of the cloud has long been the ability to elastically scale resources as needed, without requiring new physical infrastructure for end users.
Programmatic scaling has always been available in Bigtable, according to Anton Gething, Bigtable product manager at Google. He added that many Google customers have developed their own autoscaling approaches for Bigtable through the programmatic APIs. Spotify, for example, has made an open source Cloud Bigtable autoscaling implementation available.
"Today's Bigtable release introduces a native autoscaling solution," Gething said.
He added that the native autoscaling monitors Bigtable servers directly so it can respond quickly; as demand changes, so, too, does the size of a Bigtable deployment.
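Autoscaling of this kind typically works by targeting a CPU utilization level within user-configured node bounds. The sketch below illustrates that general decision logic; the function name, numbers, and scaling formula are illustrative assumptions, not Google's implementation.

```python
import math

def recommend_nodes(current_nodes: int, cpu_utilization: float,
                    cpu_target: float, min_nodes: int, max_nodes: int) -> int:
    """Pick a node count that brings projected CPU utilization near the target.

    cpu_utilization and cpu_target are fractions (e.g. 0.70 for 70%).
    """
    # Estimate the node count needed to move utilization to the target,
    # assuming load spreads evenly across nodes.
    needed = math.ceil(current_nodes * cpu_utilization / cpu_target)
    # Clamp to the user-configured bounds.
    return max(min_nodes, min(needed, max_nodes))

# A 4-node cluster running at 90% CPU with a 60% target scales out to 6 nodes.
print(recommend_nodes(4, 0.90, 0.60, min_nodes=3, max_nodes=10))  # 6
```

When load drops, the same formula recommends fewer nodes, and the minimum bound keeps the cluster from shrinking below a safe floor.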
The size of each Bigtable node is also getting a boost in the new update. Previously, Bigtable had a maximum storage capacity of 2.5 TB per node; that is now doubled to 5 TB.
Gething said users don't have to upgrade their existing deployment in order to benefit from the increased storage capacity. He added that Bigtable has a separation of compute and storage, enabling each type of resource to scale independently.
"This update in storage capacity is intended to provide cost optimization for storage-driven workloads that require more storage without the need to increase compute," Gething said.
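The cost effect is straightforward arithmetic: for a workload whose node count is dictated by storage rather than compute, doubling per-node capacity halves the minimum cluster size. A quick back-of-the-envelope check, using the article's old and new limits:

```python
import math

def nodes_for_storage(total_tb: float, per_node_tb: float) -> int:
    """Minimum node count dictated by storage alone (compute needs aside)."""
    return math.ceil(total_tb / per_node_tb)

# A hypothetical 20 TB storage-driven workload:
print(nodes_for_storage(20, 2.5))  # 8 nodes under the old 2.5 TB/node limit
print(nodes_for_storage(20, 5.0))  # 4 nodes under the new 5 TB/node limit
```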
Optimizing Google Cloud Bigtable database workloads
Another new capability that has landed in Bigtable is a feature known as cluster group routing.
Gething explained that in a replicated Cloud Bigtable instance, cluster groups provide finer-grained control over high-availability deployments and improved workload management. Before the new update, he noted that a user of a replicated Bigtable instance could route traffic to either one of its Bigtable clusters in a single cluster routing mode, or all its clusters in a multi-cluster routing mode. He said cluster groups now allow customers to route traffic to a subset of their clusters.
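The three routing options can be pictured as a function from an app profile's routing mode to the set of clusters traffic may reach. This is a conceptual sketch only; the function and mode names are illustrative, not the Bigtable API.

```python
def eligible_clusters(all_clusters, mode, target=None):
    """Return which clusters traffic may be routed to under each mode.

    mode: 'single' (one named cluster), 'multi' (any cluster), or
    'cluster-group' (a named subset, the new option). target is a
    cluster ID for 'single', or a list of IDs for 'cluster-group'.
    """
    if mode == "multi":
        return list(all_clusters)                # any cluster in the instance
    if mode == "single":
        if target not in all_clusters:
            raise ValueError(f"unknown cluster: {target}")
        return [target]                          # pinned to one cluster
    if mode == "cluster-group":
        return [c for c in all_clusters if c in set(target)]  # a subset
    raise ValueError(f"unknown routing mode: {mode}")

clusters = ["us-east1-b", "us-west1-a", "europe-west1-c"]
# Route a workload to only the two US clusters:
print(eligible_clusters(clusters, "cluster-group",
                        target=["us-east1-b", "us-west1-a"]))
```

Before this update, only the first two branches existed; cluster groups fill the middle ground between one cluster and all of them.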
Google has also added a new CPU utilization by app profile metric, which gives more visibility into how a given application workload is performing. While Google had provided some CPU utilization visibility to Bigtable users prior to the new update, Gething explained that the update adds new visibility dimensions: the methods used to access data and which database tables are being accessed.
"Before these additional dimensions, troubleshooting could be difficult," Gething said. "You would have visibility of the cluster CPU utilization, but you wouldn't know which app profile traffic was using up CPU, or what table was being accessed with what method."