
Google Cloud CTO talks Cloud Services Platform hybrid play

Google Cloud CTO Brian Stevens envisions an IT landscape where enterprises can more easily develop and manage apps that span on-premises environments and the cloud. Hint: Kubernetes plays a big role.

Like its chief competitors, AWS and Microsoft Azure, Google this year has pushed to simplify hybrid cloud deployments for its enterprise users.

Google's strategy -- unveiled in July -- comes in the form of Cloud Services Platform (CSP), a hybrid cloud offering that brings the vendor's managed Kubernetes service and the open source Istio service mesh, which Google co-developed with IBM and Lyft, to enterprise data centers.

At the recent Google Cloud Summit in New York City, Google Cloud CTO Brian Stevens revealed more details of Cloud Services Platform's technical components and go-to-market model, as well as the role Google expects to play in an increasingly multi-cloud world.

Why has Google put such a focus on hybrid cloud now?

Brian Stevens: It's more the maturity of Kubernetes. There really isn't that problem to solve [anymore] around VMware and virtualization. Environments in the enterprise are 90% virtualized already. VMware has already done that.

Now that Kubernetes, four years in, has reached this maturity level, there is this groundswell of people not just moving to Kubernetes-based platforms in the public cloud, but also trying to figure out how to do that on premises. So, the more that we can help them [with that], the more value it is to them.

How, exactly, will Cloud Services Platform enable enterprise hybrid clouds?

Brian Stevens, CTO, Google Cloud

Stevens: The fastest growth for GCP [Google Cloud Platform] is people adopting GKE [Google Kubernetes Engine]. The majority [of deployments are] still straight virtual machines, but the fastest-growth area within that is people going into managed Kubernetes environments.

Even our existing customers ... as they are teaching their developers how to build modern apps and deploy them on the cloud, they are not shifting 100% of everything they do. Even if it's Kubernetes-based, the way we do it is different [from] the way Red Hat does it [or] Azure Stack [or] Pivotal.

They have some common underpinnings. But, for us, what we want to do is give them a fully managed environment on premises, so it looks like they are really running Google [public] cloud on premises. If they are building an app on top of that platform, it can run in either place.
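
To make that "run in either place" point concrete, here is a minimal sketch (not Google's tooling, just the standard Kubernetes API that GKE and an on-premises cluster both expose) using the official Python Kubernetes client. The kubeconfig context names and container image are hypothetical; the point is that the same deployment code targets either environment, with only the context changing.

```python
# Minimal sketch: the same Kubernetes API call works against GKE or an
# on-premises cluster; only the kubeconfig context differs.
# Context names ("gke-prod", "onprem-lab") and the image are hypothetical.
from kubernetes import client, config


def deploy(context: str) -> None:
    # Load credentials for whichever cluster the named context points at.
    config.load_kube_config(context=context)
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(name="web", image="gcr.io/example/demo:1.0")
                    ]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)


# Same code path, two destinations.
deploy("gke-prod")    # managed Kubernetes in GCP
deploy("onprem-lab")  # Kubernetes cluster in the data center
```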

Is Cloud Services Platform geared more toward modern hybrid cloud apps, or do you expect customers to refactor legacy apps for this platform?

Stevens: Anybody who is writing new applications is typically going to container-based architectures. Giving them tools that allow them to take VM-based applications and convert them [into] container-based applications, that's really helpful for them. It's also a more efficient way for them to test applications, because everything is so much lighter-weight. You use a lot less infrastructure when you are building your applications in a container-based world.

It typically means, though, that they are changing their development environments. They are all using open source tools, increasingly, to build their development environments and do things like CI/CD and testing. [Cloud Services Platform] allows that to be very consistent.

What are the hardware requirements to run Cloud Services Platform?

Stevens: What you want with that is industry-standard hardware. Right now, we did the quickest thing, which is that the only dependency [is that enterprises are] running VMware underneath it. So, anywhere they have VMware, we can run CSP on top. Ultimately, you would probably look to a model where you'd work directly on different OEM solution stacks and work directly with OEM partners.

How does your Nutanix partnership play into this?

Stevens: Nutanix has a really curated stack. They have their own custom hypervisor and ... it's a really great software stack, integrated with hardware, that now integrates directly back into the cloud. What we've done with them is kind of the reverse: 'How do you get the Nutanix stack that people like on premises to be more consistent with an environment that runs on GCP?'

What's the go-to-market model for Cloud Services Platform, particularly the on-premises component?

Stevens: In the past, [hardware OEMs] were all doing converged [infrastructure], which gives them virtualized capabilities in their hardware. But by working with Kubernetes, they'll be able to have container-based architectures as a solution stack within their hardware.

So, there are a lot of opportunities to work with solution partners and OEMs. Right now, we [have a] small number of customers [and] work really closely with them in engineering to get it right. Once we get it right, then we'll talk about scaling it across a broader set of partners.

Will Google build its own on-premises hardware for enterprise data centers, as a report last week claimed?

Stevens: I don't have any comments to make on that, because we certainly haven't announced anything around cloud. So, that's kind of a TBD.

How many GCP customers also use other public clouds, like AWS or Azure? And what steps have you taken to accommodate a multi-cloud model for them?

Stevens: More than 50% are multi-cloud, and multi-cloud now means three [providers].

What we're doing is an open-source-first strategy. What happens is that it's really hard to try to drive standardization across competing agendas. You need all willing partners to want to come in and standardize APIs and management and all the stuff that kind of gets in your way -- all the gratuitous differentiation. It's really difficult.

The best vehicle, in my view, to drive standardization is open source. If you can create open source projects that are super valuable, it really forces the industry, even the competitors, to adopt them -- even if it's not part of their strategy.

That's why, with Kubernetes, we said, 'Containers are the way to go with cloud, not just VMs or a VMware environment,' because we knew that internally from our own architecture. And then, once Kubernetes got robust enough, you saw it adopted by the other two major public clouds.

This year, serverless, AI and containers dominated the cloud conversation. What's next?

Stevens: People thought virtualization was great, and then it was containers, and then Kubernetes. The container thing was easier for people to get their arms around. Developers can write little code modules and don't have to worry about infrastructure, but just deploy them. [Kubernetes] allows our infrastructure to be efficient, because we can use containers instead of heavyweight virtual machines.

But what allowed us [internally] to be resilient, to scale and to really implement robust security ... was Istio and service mesh. All the networking and security functions happen in front of the app, so organizations can start to think about things as services, because people don't really want to manage apps; they want to think about their world as services.

Service mesh is going to change the ability for enterprises to secure their network and build distributed networks that extend from on premises to cloud and roll out new services without risk. You can shift some users to it, and if it doesn't work quite right, you can roll it back or shift more traffic. It gives them all those controls to really run and move fast and do that in a way that's safe. But that's a multiyear journey.
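
As a rough illustration of the traffic shifting Stevens describes, the sketch below builds an Istio VirtualService with weighted routes, the mechanism Istio uses for gradual rollouts. It is a minimal example expressed in Python rather than a definitive recipe; the "checkout" service and its v1/v2 subsets are hypothetical.

```python
# Minimal sketch of a weighted traffic shift as an Istio VirtualService
# (networking.istio.io/v1alpha3), built as a Python dict and printed as
# JSON, which kubectl can apply. The "checkout" service and its v1/v2
# subsets are hypothetical.
import json


def virtual_service(canary_weight: int) -> dict:
    """Route canary_weight percent of traffic to v2, the rest to v1."""
    return {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "checkout"},
        "spec": {
            "hosts": ["checkout"],
            "http": [
                {
                    "route": [
                        {"destination": {"host": "checkout", "subset": "v1"},
                         "weight": 100 - canary_weight},
                        {"destination": {"host": "checkout", "subset": "v2"},
                         "weight": canary_weight},
                    ]
                }
            ],
        },
    }


# Shift 10% of users to the new version; rolling back is just reapplying
# the resource with a weight of 0, ramping up is reapplying with more.
print(json.dumps(virtual_service(10), indent=2))
```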
