How cloud-native principles affect IT operations

Cloud-native technologies have revolutionized application delivery, but their impact on IT operations and network architecture must be examined closely to get the most from them.

Cloud-native development techniques are widely discussed, but there's not enough conversation about the operations implications of these approaches. What must IT and network operations do differently?

Cloud native describes applications that exploit cloud benefits, such as the following:

  • elasticity under variable load;
  • resilience in the face of failures;
  • reusability of components; and
  • hosting independence to support hybrid and multi-cloud deployments.

Delivering on these benefits demands contributions from both software development and the cloud infrastructure, including middleware tools.

Cloud-native technology is a symbiotic mixture of development practices, middleware, servers and network technology. It shifts thinking from deploying whole applications to deploying feature components that are composed into applications. That shift must drive changes both to development practices and to how organizations conceive of the infrastructure and resources under operations control.

Consider your cloud provider carefully

The first -- and perhaps largest -- operations implication of a cloud-native model is that you must plan for the cloud platform, rather than for applications. For cloud-native approaches to work, applications must use a common set of tools that supports deployment and redeployment consistently, regardless of where components run. The tools' capabilities and requirements imposed on software must then be communicated back to development teams.

Applications require DevOps-forward communications between developers and operations, while cloud-native environments require OpsDev, or Ops-centric, thinking. To implement a cloud-native operations plan, follow these two steps:

1. Decide on a mechanism for microservice access.

There are two broad options: API gateways or brokers, and service mesh. API gateways are familiar to most organizations, but service mesh technology is more flexible and, in the long run, likely to impose the lowest operations burden. The more cloud-native adoption expected -- and the greater the speed of adoption -- the more likely it is that a service mesh will be necessary.
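The difference shows up directly in application code. Below is a minimal sketch in Python using the requests library; the gateway URL, token and in-cluster service name are hypothetical. With a gateway, routing and credentials are the caller's concern; with a mesh, a sidecar proxy absorbs them.

```python
import requests

# Via an API gateway: the caller addresses the gateway explicitly and
# carries gateway-issued credentials. (URL and token are hypothetical.)
resp = requests.get(
    "https://gateway.example.internal/inventory/v1/items/42",
    headers={"Authorization": "Bearer GATEWAY_ISSUED_TOKEN"},
    timeout=5,
)

# Via a service mesh: the caller uses the plain in-cluster service name;
# the mesh sidecar transparently handles mTLS, retries and routing policy,
# so no gateway-specific logic leaks into application code.
resp = requests.get(
    "http://inventory.default.svc.cluster.local/v1/items/42",
    timeout=5,
)
```

The operational consequence is that gateway-based access keeps policy visible but couples applications to the gateway, while mesh-based access centralizes policy in operations-controlled infrastructure.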

2. Maximize resource equivalence.

Like all software, microservices have software and hardware requirements. In cloud-native deployments, it's common to use orchestration features, such as Kubernetes' node affinities and taints and tolerations, to steer microservices toward suitable hosting points. OpsDev cooperation aims to establish a small, agreed-upon set of hosting classes into which all microservices must fit. Without that agreement, steering pods to nodes becomes complex, expensive and error-prone.
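As a concrete illustration, the sketch below uses the Kubernetes Python client to declare a pod for a single hosting class. The "hosting-class" label and taint scheme is an assumption for the example, not a Kubernetes standard; the idea is that each class's nodes are labeled and tainted so that only pods declared for that class land on them.

```python
from kubernetes import client

# A sketch only: assumes nodes in each hosting class carry a
# "hosting-class" label plus a matching NoSchedule taint.
affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
            node_selector_terms=[client.V1NodeSelectorTerm(
                match_expressions=[client.V1NodeSelectorRequirement(
                    key="hosting-class", operator="In", values=["general-compute"]
                )]
            )]
        )
    )
)

pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="svc", image="example/svc:1.0")],
    affinity=affinity,  # steer the pod toward general-compute nodes
    tolerations=[client.V1Toleration(  # allow it onto the tainted class
        key="hosting-class", operator="Equal",
        value="general-compute", effect="NoSchedule",
    )],
)
```

With a handful of agreed hosting classes, this pairing of affinity and toleration is the only scheduling logic any microservice needs to declare.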

But the effort to maximize resource equivalence doesn't stop there. Hybrid and multi-cloud deployments have resource pools spread across different hosting platforms. Those platform differences should not affect hosting decisions, or your organization's resource pool will offer different performance characteristics across its various domains.

Choose cloud services and features -- and local hosting capabilities -- with the goal of uniformity. Then, adopt a master orchestrator, such as Google Anthos or AWS Outposts, to manage deployment across all clouds and data centers.
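One lightweight way to enforce that uniformity is to keep a single, operations-owned table mapping each hosting class to its nearest equivalent on every platform, so deployment tooling never reasons about provider-specific instance types directly. A sketch, with illustrative class names and instance types:

```python
# Illustrative only: class names and instance types are assumptions.
HOSTING_CLASSES = {
    "general-compute": {
        "aws": "m5.xlarge",
        "gcp": "n2-standard-4",
        "onprem": "std-4cpu-16gb",
    },
    "memory-heavy": {
        "aws": "r5.2xlarge",
        "gcp": "n2-highmem-8",
        "onprem": "mem-8cpu-64gb",
    },
}

def resolve_instance_type(hosting_class: str, platform: str) -> str:
    """Translate a platform-neutral hosting class into the equivalent
    instance type for a given platform."""
    return HOSTING_CLASSES[hosting_class][platform]

# e.g. resolve_instance_type("memory-heavy", "gcp") -> "n2-highmem-8"
```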

Network architecture is key

Networking is the final issue to address in resource equivalence. Companies almost always have VPNs that connect users to the APIs that provide access to applications. These VPNs create a user address space that should be kept as static as possible. Public cloud services and application hosting, by contrast, are usually based on a private address space that permits cross-communication between application components, which in cloud-native terms means microservices.

Where application components communicate outside the application's own address space, that connectivity might require explicit support through some form of address translation.

As intercomponent traffic increases, connectivity management can become complicated; that growth is itself a good indicator that it's time to adopt a service mesh. A private address space, or a subset of the company VPN, connects all the cloud-native components via an API gateway or a service mesh. This component address space can then link to the user address space where user-facing APIs are involved.
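Because the component address space and the user address space must interconnect without colliding, it's worth validating the planned CIDR ranges programmatically before anything is provisioned. A minimal sketch with Python's ipaddress module; the ranges are illustrative:

```python
import ipaddress

# Illustrative ranges: the company VPN (user space) and the private
# space that carries cloud-native intercomponent traffic.
user_space = ipaddress.ip_network("10.8.0.0/16")
component_space = ipaddress.ip_network("10.64.0.0/14")

# Overlapping spaces make the translation and linking rules ambiguous.
if user_space.overlaps(component_space):
    raise ValueError("User and component address spaces overlap; re-plan CIDRs")
```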

Because there are so many moving parts in building a consistent platform for cloud-native applications across hybrid and multi-cloud environments, it's critical to draw out the addressing framework. Additionally, document the hosting classes for which your IT team intends to offer operational support, and how affinities, taints and tolerations steer pods to the right nodes.

Maintain an open dialogue with the development team, continuing the OpsDev flow as new application models emerge. A cloud-native plan is a technology partnership, but it's an organizational partnership first and foremost. Without an effective, continuous exchange of plans and options, even a successful start on cloud-native operations isn't enough: Things will simply go wrong later.
