
Implement Kubernetes for multi-cloud architecture security

Uncover how orchestration tools benefit multi-cloud environments, and get help selecting the right deployment model for Kubernetes in multi-cloud architectures.

Securing cloud workloads becomes more complex in multi-cloud environments for three major reasons. First, security features vary from provider to provider; Azure Sentinel is distinct from AWS CloudTrail, for example. Second, the way security policies are implemented -- such as how logs are accessed, what types of data are logged or how administrators reach resources through the provider's portal -- differs across providers. Third, operational security tasks must also account for the nuances of each provider's service offerings. As a result, lock-in can occur when workloads use a provider's underlying security services directly.

To ease these challenges, IT leaders can incorporate orchestration into the organization's multi-cloud strategy. Kubernetes, a popular open source container orchestration system, can manage cloud workloads and provide a layer of abstraction between a cloud provider's native security services and its customers' security policy goals. In some cases, multi-cloud orchestration tools can also reduce lock-in by standardizing how security services are accessed and used. Under the right circumstances, each of these use cases brings potential security benefits.

Here, learn how Kubernetes can add value to multi-cloud security planning, as well as how to evaluate and select the right deployment option.

Security benefits of Kubernetes in multi-cloud environments

Kubernetes' security value proposition is rooted in its role as a framework that automates common management tasks. For example, when deploying VMs or application containers, orchestration handles provisioning, deprovisioning and as-needed resource scaling, as well as workload prerequisites and dependencies, such as secrets management.
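
To make this concrete, the following is a minimal sketch that uses the official Kubernetes Python client, assuming a reachable cluster and a local kubeconfig; the namespace, image and credential values are hypothetical. It registers a secret with the orchestrator and then declares a deployment that consumes it, leaving provisioning, scaling and secret distribution to Kubernetes rather than to provider-specific services.

    from kubernetes import client, config

    # Assumes a kubeconfig on the local machine and an existing "payments"
    # namespace; all names and values here are hypothetical.
    config.load_kube_config()
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # Hand the credential to the orchestrator instead of calling a
    # provider-native secrets service from the workload.
    core.create_namespaced_secret(
        namespace="payments",
        body=client.V1Secret(
            metadata=client.V1ObjectMeta(name="db-credentials"),
            string_data={"username": "app", "password": "change-me"},
        ),
    )

    # Declare the workload and its dependency on the secret; Kubernetes handles
    # provisioning, replica scaling and injecting the credential at runtime.
    apps.create_namespaced_deployment(
        namespace="payments",
        body=client.V1Deployment(
            metadata=client.V1ObjectMeta(name="payments-api"),
            spec=client.V1DeploymentSpec(
                replicas=3,
                selector=client.V1LabelSelector(match_labels={"app": "payments-api"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "payments-api"}),
                    spec=client.V1PodSpec(containers=[client.V1Container(
                        name="api",
                        image="registry.example.com/payments-api:1.4",
                        env_from=[client.V1EnvFromSource(
                            secret_ref=client.V1SecretEnvSource(name="db-credentials"),
                        )],
                    )]),
                ),
            ),
        ),
    )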

To understand the value of Kubernetes for multi-cloud architecture, consider the difference between a store credit card and a general-purpose bank card. The store card provides access to extra features -- such as increased discounts and loyalty programs -- but works only at that one store. The bank card, alternatively, enables the cardholder to shop anywhere, though without some or all of the extra benefits. Provider-native security services are the store card; an orchestration layer is the general-purpose card.

The orchestration provider subsumes some security-critical elements in its offerings -- for example, enforcing adequate confidentiality for stored secrets, ensuring appropriate access controls to secrets and workloads, and verifying that new workloads are provisioned with appropriate configurations.
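
As one example of those platform-enforced access controls, the following sketch (again using the Kubernetes Python client and the same hypothetical names) creates an RBAC role that permits reading only the single secret a workload needs; a role binding to the workload's service account would complete the least-privilege setup.

    from kubernetes import client, config

    # A minimal sketch: least-privilege access to one secret, enforced by the
    # orchestration platform itself. Names are hypothetical.
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    rbac.create_namespaced_role(
        namespace="payments",
        body=client.V1Role(
            metadata=client.V1ObjectMeta(name="db-credentials-reader"),
            rules=[client.V1PolicyRule(
                api_groups=[""],  # core API group, where secrets live
                resources=["secrets"],
                resource_names=["db-credentials"],
                verbs=["get"],
            )],
        ),
    )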

The orchestration platform's built-in security capabilities give organizations an alternative to the cloud provider's native capabilities -- and, more importantly, an alternative the orchestration platform itself understands. This reduces vendor lock-in because the provider's native security services aren't called directly. Rather than re-implement the underlying services, organizations can, in most cases, move to another provider's implementation of the same orchestration platform. The underlying implementation can thus be swapped out as needed in a way that is invisible to the workload.
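
In practice, that swap can be as small as changing a kubeconfig context. Because the sketch below reads the credential through the Kubernetes API rather than through a provider-native secrets service, the same code runs unchanged against clusters hosted by different providers; the context and object names are hypothetical.

    from kubernetes import client, config

    # Hypothetical kubeconfig contexts pointing at clusters run by different
    # providers; the API calls are identical for each of them.
    for context_name in ("eks-prod", "aks-prod", "gke-prod"):
        api_client = config.new_client_from_config(context=context_name)
        core = client.CoreV1Api(api_client)

        # Read the orchestrator-managed secret -- no provider-specific
        # secrets API is involved.
        secret = core.read_namespaced_secret("db-credentials", namespace="payments")
        print(context_name, sorted((secret.data or {}).keys()))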

Likewise, supporting automation, metrics collection and custom tooling developed to interact directly with the orchestration platform can easily be redirected to a different environment should workloads move -- provided the platform natively supports the features in use.
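
For instance, a small piece of custom security tooling written against the Kubernetes API -- sketched below under the assumption that the local kubeconfig lists every cluster of interest -- can flag privileged containers in any environment, whichever providers host the clusters.

    from kubernetes import client, config

    # A sketch of portable custom tooling: flag privileged containers in every
    # cluster the local kubeconfig knows about. Nothing here is specific to any
    # one cloud provider.
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        core = client.CoreV1Api(config.new_client_from_config(context=ctx["name"]))
        for pod in core.list_pod_for_all_namespaces().items:
            for container in pod.spec.containers:
                security = container.security_context
                if security is not None and security.privileged:
                    print(f"{ctx['name']}: {pod.metadata.namespace}/{pod.metadata.name} "
                          f"runs privileged container {container.name}")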

How to deploy Kubernetes in multi-cloud architectures

Once an organization decides to incorporate orchestration into its multi-cloud security strategy, it needs to establish the best way to do so. This involves three major steps: defining the scope, understanding deployment options and selecting a deployment model.

Define the scope

First, decide which use cases the multi-cloud orchestration tool must support -- for example:

  • One vs. many cloud-managed environments. Does the organization need to support only one provider or multiple services at different providers?
  • Hybrid environments. Will both on-premises and managed cloud environments require support?
  • Blended container and VM environments. Is the same orchestration service used for both VMs and containers or just one of the two?

Understand deployment options

Once the scope is defined, IT leaders need to identify the right deployment model for the organization's needs and use cases. Compare the three primary deployment models -- managed Kubernetes services, open source models and roll your own -- as well as the capabilities of each before making a decision.

  1. Managed Kubernetes service. Many public cloud providers offer Kubernetes orchestration capabilities in their service set, including Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE). These enable organizations to provision directly into the service using an as-a-service approach, alleviating the burden of setting up and configuring Kubernetes in-house (see the provisioning sketch after this list).
  2. Open source. Open source options, such as Kubernetes Operations (kOps), as well as commercial counterparts, enable organizations to automate workload provisioning into cloud environments. Tools such as AWS Fargate work across cloud-managed platforms and native Kubernetes deployments -- whether in a virtual private cloud or on premises.
  3. Roll your own. Organizations may deploy the orchestration software themselves into a provider's elastic compute environment.
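
To illustrate the managed-service route from option 1, the sketch below uses the AWS SDK for Python (Boto3) to request an EKS cluster; the cluster name, role ARN and subnet IDs are placeholders, and AKS and GKE expose comparable provisioning APIs. It is a sketch of the as-a-service approach, not a production-ready configuration.

    import boto3

    # A hedged sketch of the as-a-service model: ask the provider to build and
    # operate the Kubernetes control plane. All identifiers are placeholders.
    eks = boto3.client("eks", region_name="us-east-1")
    eks.create_cluster(
        name="multicloud-demo",
        roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
        resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
    )
    # Once the cluster is active, the same Kubernetes tooling shown earlier
    # works against it: the provider runs the control plane, but the API the
    # workloads and custom tooling see is standard Kubernetes.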

Select a deployment model

The right deployment option for safeguarding a multi-cloud environment will depend on the organization's context, business goals and security goals.

Managed Kubernetes services, such as EKS, AKS and GKE, enable a speedy rollout. Because they use standardized orchestration mechanisms under the hood, these services foster portability of workloads among providers, but they come with some degree of vendor lock-in, often because the cloud service provider wraps the underlying tools in its own management interface. This can benefit security operations staff who are accustomed to that provider's administrative UI and security model, but it is less optimal when trying to normalize management views across competing services from multiple providers.

Cloud-agnostic options, such as kOps and its commercial counterparts, can offset some of the drawbacks of managed Kubernetes services in multi-cloud environments. But keep in mind which providers and services each tool supports. Currently, kOps supports AWS, with Google Compute Engine support in beta and Azure support in alpha. IT leaders need to carefully evaluate the proposed usage to ensure the selected tools are supported, given the organization's service provider usage profile.

The roll-your-own option gives the greatest degree of flexibility, as any custom-developed tooling will be maximally portable. The downside of this approach is that it requires significantly more experience, acumen with the platform, and time investment from the operational staff who deploy and maintain it.

Ultimately, the right decision for any specific organization will vary depending on its requirements and usage. As with any other planning task, understand those requirements ahead of time, and systematically analyze them against the deployment options to find the best approach.
