
Three requirements for a hybrid cloud computing deployment

As the hybrid cloud computing approach gains steam, organizations will need to pay close attention to cross-cloud connectivity and management, plus microservices.

Organizations must start to plan for hybrid to be their default cloud computing approach by putting in place the proper tools and processes.

Many IT professionals still think of cloud in terms of a mixed approach, with different workloads running independently of each other, either on premises or in the public cloud. For most organizations, however, the end state will be a true hybrid cloud computing deployment, with workloads operating in a seamless, integrated manner across a combination of private and public clouds.

At the moment, the portability of workloads across different clouds is still fraught with problems. Issues with data latency, for example, mean that most organizations still operate a model in which all business logic and data for a specific workload are co-located on a single cloud. And it can be difficult to maintain highly responsive and available services when an organization has full control over the private cloud but differing levels of control over various public cloud workloads.
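The data latency trade-off described above can be made concrete. The sketch below, with illustrative names and thresholds of my own, estimates whether business logic and data could be split across clouds given the measured cross-cloud round-trip time and the workload's responsiveness budget; a chatty workload quickly exhausts its budget, which is why co-location remains the norm.

```python
# Sketch: decide whether a workload's business logic and data can be
# split across clouds, based on measured round-trip latency and the
# workload's responsiveness budget. All names and numbers are
# illustrative assumptions, not from any specific product.

def can_split_workload(cross_cloud_rtt_ms: float,
                       calls_per_request: int,
                       latency_budget_ms: float) -> bool:
    """Return True if logic and data may live on different clouds.

    cross_cloud_rtt_ms: measured round trip between the two clouds
    calls_per_request:  data-layer round trips per user request
    latency_budget_ms:  acceptable added latency per user request
    """
    added_latency = cross_cloud_rtt_ms * calls_per_request
    return added_latency <= latency_budget_ms

# A chatty workload making 50 data calls per request over a 20 ms link
# adds 1,000 ms against a 200 ms budget, so it must stay co-located:
print(can_split_workload(20.0, 50, 200.0))   # False
print(can_split_workload(2.0, 50, 200.0))    # True
```

The same check, run against real latency measurements, is one way a hybrid management layer could decide workload placement automatically.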

A true hybrid cloud computing deployment requires proper connectivity, management and support for emerging technology such as microservices.

Hybrid cloud connectivity

To start a hybrid cloud computing deployment, look to public cloud providers that support high-performance interconnectivity with on-premises systems. Amazon Web Services Direct Connect, Microsoft Azure ExpressRoute and Google Cloud Interconnect all provide this functionality, but they do not connect directly into private data centers. They terminate at defined points of presence (POPs), from which individual organizations must extend connectivity to their private facilities over leased lines or other WAN links, using 802.1Q virtual LANs or Multiprotocol Label Switching to ensure high levels of availability and performance.

Some colocation providers host these POPs within their own facilities, however, so organizations can place their private clouds there and avoid the extra WAN hop. Other colos offer a mix of their own connectivity and connectivity from public cloud providers. They can also give customers access to public cloud services hosted within their own facilities at data center speeds, using in-facility connections.

Hybrid cloud management

There is also a need for overarching hybrid cloud management, which has to be capable of monitoring and controlling workloads regardless of where they reside.

Key capabilities should include workload provisioning (generally through the use of containers, such as Docker or LXD) and identifying the root causes of any problems. Lifecycle management of such workloads, including the ability to close them down and recover resources and licenses as necessary, is another important feature.
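Lifecycle management of this kind can be sketched in a few lines. The class names, states and license-pool model below are hypothetical, but they show the core behavior the article calls for: provisioning a workload, recording where it runs, and closing it down while recovering resources and licenses.

```python
# Sketch of the lifecycle tracking a hybrid cloud manager needs.
# All names, states and the license-pool model are illustrative.

from dataclasses import dataclass
from enum import Enum

class State(Enum):
    RUNNING = "running"
    CLOSED = "closed"

@dataclass
class Workload:
    name: str
    location: str          # e.g. "private-dc-1" or "aws-us-east-1"
    licenses: int
    state: State = State.RUNNING

class HybridManager:
    def __init__(self, license_pool: int):
        self.free_licenses = license_pool
        self.workloads: dict[str, Workload] = {}

    def provision(self, name: str, location: str, licenses: int) -> Workload:
        """Provision a workload anywhere, drawing from a shared license pool."""
        if licenses > self.free_licenses:
            raise RuntimeError("license pool exhausted")
        self.free_licenses -= licenses
        wl = Workload(name, location, licenses)
        self.workloads[name] = wl
        return wl

    def close_down(self, name: str) -> None:
        """Close a workload and recover its licenses for reuse."""
        wl = self.workloads[name]
        wl.state = State.CLOSED
        self.free_licenses += wl.licenses

mgr = HybridManager(license_pool=10)
mgr.provision("billing", "aws-us-east-1", licenses=4)
mgr.close_down("billing")
print(mgr.free_licenses)  # 10 -- licenses recovered
```

The key design point is that the manager, not the individual cloud, owns the inventory: the same pool covers workloads whether they run on premises or in a public cloud.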

Hybrid cloud and microservices

Organizations should look for advanced functionality, such as the ability to use catalogs of microservices and negotiate technical contracts on the fly. There are still few standards to drive such an approach, but for a hybrid cloud computing deployment to deliver on its promise, calls between services must be loosely coupled: a calling service should be able to find a responding service rather than be wired to it.

At the moment, the vast majority of microservice couplings are essentially hard-coded -- calling service A knows where responding service B is, and the interactions are coded accordingly. In the future, there will be a need for the calling service to be able to request and utilize any responding service, whether in a private or public cloud, as long as it meets a set of technical and business policies, such as transactions per time unit and cost per use.
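The loosely coupled model described above amounts to policy-based discovery. The sketch below uses a hypothetical in-memory catalog and invented service names; the caller asks for any responding service, in any cloud, that meets its technical policy (transactions per time unit) and business policy (cost per use), as the article describes.

```python
# Sketch of loosely coupled service discovery: instead of hard-coding
# responding service B, the caller queries a catalog for any service
# meeting its policies. Catalog entries and field names are illustrative.

from dataclasses import dataclass

@dataclass
class ServiceOffer:
    name: str
    cloud: str                 # "private" or "public"
    tps: int                   # transactions per second it can sustain
    cost_per_call: float       # currency units per invocation

CATALOG = [
    ServiceOffer("tax-calc-a", "private", tps=500,  cost_per_call=0.002),
    ServiceOffer("tax-calc-b", "public",  tps=2000, cost_per_call=0.01),
]

def discover(min_tps: int, max_cost: float) -> ServiceOffer:
    """Return the cheapest catalog entry satisfying both policies."""
    candidates = [s for s in CATALOG
                  if s.tps >= min_tps and s.cost_per_call <= max_cost]
    if not candidates:
        raise LookupError("no service meets the policy")
    return min(candidates, key=lambda s: s.cost_per_call)

print(discover(min_tps=400, max_cost=0.005).name)   # tax-calc-a
print(discover(min_tps=1000, max_cost=0.05).name)   # tax-calc-b
```

Note that the caller never names a specific service or cloud; swapping a provider in or out is a catalog change, not a code change, which is the essence of loose coupling.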
