A growing number of organizations mix in-house legacy applications and modern microservices applications that live on public cloud. Many choose this hybrid cloud architecture for the economic benefits, while others adopt the approach to retain some control over IT infrastructure. All must grapple with hybrid cloud operations to maintain application performance.
To prepare your organization's existing virtualized infrastructure for its hybrid cloud future, align servers, microservices applications and distributed storage so that workloads can move to and from the cloud without bottlenecks. A high-performance hybrid cloud architecture requires constant monitoring, management and maintenance, as well as a rethinking of some fundamentals, such as what constitutes an application.
Follow these five principles to make sure a hybrid cloud architecture can handle communication and execution between microservices applications and other workloads.
1. Think in workloads
Hybrid cloud management changes how we think about microservices application deployment. All application services -- whether in the cloud or on premises -- are workloads, which collectively make up a larger workflow. In a hybrid cloud architecture, a workload is an independently executed service or collection of code. It must include all the network, hosting and web service features that the application might use.
Create a management strategy for hybrid cloud applications with this deployment unit in mind. There are many types of workloads in the hybrid cloud -- batch, transactional and analytical, for example -- and each entails different computing requirements. Furthermore, the abstraction inherent in a cloud architecture does not suit all workloads, as some need high-performance network storage or negligible latency.
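The idea of matching workload requirements to deployment targets can be sketched in code. This is an illustrative model only -- the `Workload` descriptor, its fields and the capability values below are hypothetical, not part of any cloud API:

```python
from dataclasses import dataclass

# Hypothetical workload descriptor; field names are illustrative.
@dataclass
class Workload:
    name: str
    kind: str                 # "batch", "transactional" or "analytical"
    needs_low_latency: bool   # rules out distant cloud regions
    needs_fast_storage: bool  # e.g. high-performance network storage

def eligible_targets(w: Workload, targets: dict) -> list:
    """Return deployment targets that satisfy the workload's constraints."""
    return [
        name for name, caps in targets.items()
        if (not w.needs_low_latency or caps["latency_ms"] <= 10)
        and (not w.needs_fast_storage or caps["fast_storage"])
    ]

# Made-up capability figures for two hosting locations.
targets = {
    "on-prem":      {"latency_ms": 2,  "fast_storage": True},
    "public-cloud": {"latency_ms": 35, "fast_storage": False},
}

oltp = Workload("orders-db", "transactional", True, True)
batch = Workload("nightly-etl", "batch", False, False)

print(eligible_targets(oltp, targets))   # latency-sensitive: on-prem only
print(eligible_targets(batch, targets))  # batch tolerates the cloud's abstraction
```

The transactional workload is pinned on premises by its latency and storage constraints, while the batch job can run anywhere -- the kind of placement decision a hybrid cloud management strategy has to make for every workload.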
2. Global load balancing
While most public cloud providers offer load-balancing services, these typically don't operate outside the vendor's environment. This lack of interoperability complicates load balancing for tasks like cloud bursting, where data has to move across a cloud boundary before processing can start. To avoid bottlenecks and maintain latency levels in a hybrid cloud architecture, a load balancer must work across all clouds used, including the private infrastructure on premises. Service mesh technology, such as Istio and Linkerd, provides specialized load balancing for microservices applications.
Global server load balancing (GSLB) is the intelligent distribution of internet traffic across multiple servers located in diverse geographies, which a hybrid cloud environment demands. With a GSLB system, no single location is overloaded by requests or slowed down by bottlenecks.
3. Pick the right tools
While it is challenging to manage a hybrid cloud application, the right set of tools makes a big difference.
Microservices often deploy on highly portable containers in the cloud. Evaluate specialized tools for deployment and configuration management in a Kubernetes container orchestration environment, such as Helm and Spinnaker. These tools provide an additional abstraction level for managing infrastructure: Helm packages Kubernetes applications as versioned charts, while Spinnaker builds deployment pipelines, giving greater control and efficiency in infrastructure creation and management.
In addition, Rundeck is an open source automation tool that helps with workflows and configuration management. Try a combination of tools to get the job done.
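As a small illustration of scripting around such tools, the sketch below assembles a standard Helm deploy command. The release and chart names are hypothetical; the flags shown (`upgrade --install`, `--namespace`, `--values`) are standard Helm CLI options:

```python
# Sketch: build a Helm deploy command for one workload.
# Release/chart/namespace names are hypothetical placeholders.
def helm_deploy_cmd(release: str, chart: str, namespace: str,
                    values_file: str) -> list:
    return [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--values", values_file,
    ]

cmd = helm_deploy_cmd("orders-api", "./charts/orders-api",
                      "prod", "values-prod.yaml")
print(" ".join(cmd))
# An automation wrapper would then execute it, e.g.:
# import subprocess; subprocess.run(cmd, check=True)
```

Wrapping tool invocations this way -- whether in Python, Rundeck jobs or Spinnaker pipelines -- keeps deployments repeatable across the on-premises and cloud halves of the architecture.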
4. Set a standard
Orchestration across multiple clouds can be tricky because there is no uniform standard from one platform to another. Microservices application policies are ingested in many formats across cloud platforms. There's also a complex variety of hosting setups that can make up a hybrid cloud for microservices, such as public and private cloud, physical and virtual instances, and VMs and containers. To apply policies uniformly, choose a cloud orchestration technology that uses a common language as a contract between the network devices and everything that interacts with them.
One standard that has gained in popularity for its ability to help users shift workloads between clouds is the Topology and Orchestration Specification for Cloud Applications (TOSCA). TOSCA was created by OASIS, a nonprofit standards consortium, for the purpose of cloud application portability. The TOSCA open source language is a part of cloud orchestration and management tools and frameworks, such as Cloudify, Ubicity and Alien4Cloud.
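To make the idea concrete, here is a minimal TOSCA service template sketched as a Python dict (TOSCA templates are normally written in YAML). The node type `tosca.nodes.Compute` comes from the TOSCA Simple Profile; the property values are illustrative:

```python
import json

# Minimal TOSCA service template, expressed as a dict for illustration.
# Property values (CPU count, memory size) are placeholders.
template = {
    "tosca_definitions_version": "tosca_simple_yaml_1_3",
    "description": "Portable description of one compute workload",
    "topology_template": {
        "node_templates": {
            "app_server": {
                "type": "tosca.nodes.Compute",
                "capabilities": {
                    "host": {
                        "properties": {"num_cpus": 2, "mem_size": "4 GB"},
                    },
                },
            },
        },
    },
}

# An orchestrator such as Cloudify would consume the YAML form of this.
print(json.dumps(template, indent=2))
```

The point of the standard is that this one portable description of the workload's topology and requirements can be handed to different orchestrators targeting different clouds.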
NETCONF (Network Configuration Protocol), overseen by the Internet Engineering Task Force (IETF), is another relevant standard. It works with YANG (Yet Another Next Generation), a network data modeling language also defined by the IETF. For cloud management, pick tools that adhere to these standards as one way to maintain portability.
5. Count the cost
Economics is a crucial consideration for organizations that create and maintain a hybrid cloud architecture -- specifically, how to keep costs down.
Tools that analyze cloud pricing include Microsoft's Cloudyn, VMware's CloudHealth Technologies, CloudCheckr, Densify (formerly Cirba), CloudAware and Cloudability. Pick a tool only if it works with the application's current and expected future cloud providers.
Actively track all chosen cloud providers' traffic and pricing policies. Hybrid and multi-cloud users must beware of overspending on cloud providers' egress fees, which are charged when data crosses their borders. An organization pays when application traffic exits one cloud, and sometimes even when it moves across regions within a single vendor's domain. Such charges can quickly make a setup economically impractical, so plan the network topology deliberately -- for example, ensure all your cloud providers share your VPN address space, with each assigned its own address range within the VPN.
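A back-of-the-envelope egress estimate makes the risk visible. The per-GB rates below are placeholders for illustration -- look up your providers' current pricing:

```python
# Hypothetical per-GB egress rates in USD; real rates vary by provider,
# region and monthly volume tier.
EGRESS_USD_PER_GB = {
    ("cloud_a", "cloud_b"): 0.09,  # cross-cloud traffic pays egress
    ("cloud_a", "cloud_a"): 0.02,  # even cross-region within one vendor
}

def monthly_egress_cost(flows: list) -> float:
    """flows: (source, destination, GB per month) triples."""
    return sum(gb * EGRESS_USD_PER_GB[(src, dst)] for src, dst, gb in flows)

flows = [
    ("cloud_a", "cloud_b", 500.0),   # hybrid bursting traffic
    ("cloud_a", "cloud_a", 1000.0),  # cross-region replication
]
print(f"${monthly_egress_cost(flows):.2f} per month")  # $65.00
```

Even modest cross-boundary traffic adds up month after month, which is why egress-heavy workflows are often the first thing a cost review flags.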
To manage multiple workloads in a hybrid architecture, organizations must weigh the many aspects of cloud operations, which include load balancing, costs, standards and automation. Keep these best practices in mind to reap the benefits of hybrid cloud setups for microservices.