Enterprises that adopt cloud computing often find themselves with multiple cloud providers due to geographic coverage of providers, differences in available features or the desire to avoid lock-in. Multi-cloud is usually more expensive, and the way multiple clouds integrate with each other and data centers can significantly affect costs.
The key step when reviewing multi-cloud integration is to consider workflows. Most enterprises would have little difficulty with integration if multi-cloud strategies affected only hosting costs. The problem is that cloud-based applications move data in workflows that link application components. When these workflows cross cloud boundaries, they almost always generate ingress and egress charges.
Avoid high multi-cloud integration costs -- primarily due to ingress and egress charges -- through careful planning and proven best practices.
Rule 1. Don't spread components across clouds
Enterprises shouldn't spread an application's components across multiple clouds. Doing so creates additional traffic costs, and the only way to avoid them is to rethink how components are deployed. Application deployment policies should prohibit multi-cloud distribution of a single application's components and require specific justification for any exception to this rule.
Following this rule might mean enterprises have to duplicate some applications in multiple clouds so work doesn't move between cloud providers. This duplication is most likely required when enterprises adopt multi-cloud for user geographic distribution, either to optimize performance or address governance issues.
Enterprises that choose multi-cloud to optimize features per application might need to pick the single cloud best suited to each application and accept suboptimal feature support in order to manage cloud costs. Hosting an application in multiple clouds increases hosting costs. As a result, enterprises should weigh hosting costs against the effect of ingress and egress charges on overall costs to make the optimum decision.
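That comparison is simple arithmetic, and it helps to make it explicit. The sketch below is illustrative only: the function names, hosting figures and per-gigabyte egress rate are assumptions, not real provider pricing.

```python
# Hypothetical comparison: run everything in one (pricier) cloud, or
# split components across two clouds and pay for cross-cloud traffic.
# All rates are illustrative assumptions, not real provider pricing.

def monthly_cost_single_cloud(hosting: float) -> float:
    """Everything runs in one cloud; no cross-cloud traffic charges."""
    return hosting

def monthly_cost_split(hosting_a: float, hosting_b: float,
                       cross_cloud_gb: float, egress_per_gb: float) -> float:
    """Components split across clouds A and B; workflow traffic that
    crosses the cloud boundary pays per-gigabyte transfer charges."""
    return hosting_a + hosting_b + cross_cloud_gb * egress_per_gb

# Example: a somewhat pricier single cloud can still beat a split
# deployment once 5 TB/month of cross-cloud workflow traffic is counted.
single = monthly_cost_single_cloud(hosting=1200.0)
split = monthly_cost_split(hosting_a=500.0, hosting_b=450.0,
                           cross_cloud_gb=5000.0, egress_per_gb=0.09)
print(single)        # 1200.0
print(split)         # 1400.0
print(single < split)  # True
```

The point of the exercise isn't the exact numbers; it's that cross-cloud traffic belongs in the model at all, because it often dominates the hosting delta.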
Ingress and egress multi-cloud cost control strategies most often go wrong because of failover and cloud bursting. Both create new component instances: failover in response to a failure, cloud bursting in response to increased application load. In either situation, it's important to frame policies and practices so that new instances of application components deploy in the same cloud where that application is normally hosted. If enterprises use multi-cloud primarily to back up cloud resources, they should select an on-demand pricing model for backup resources to avoid paying for unused hosting.
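Such a placement rule can be enforced with a simple check in deployment tooling. This is a minimal sketch, assuming a mapping of applications to their home clouds; the application and cloud names are hypothetical.

```python
# Hypothetical deployment policy: failover and burst instances must land
# in the application's home cloud. Names are illustrative.
HOME_CLOUD = {
    "order-entry": "cloud-a",
    "analytics": "cloud-b",
}

def placement_allowed(app: str, target_cloud: str) -> bool:
    """Permit failover or scale-out placements only in the app's home
    cloud; anything else requires explicit justification outside this
    automated check."""
    return HOME_CLOUD.get(app) == target_cloud

# A burst instance of order-entry in cloud-b is rejected by default.
print(placement_allowed("order-entry", "cloud-a"))  # True
print(placement_allowed("order-entry", "cloud-b"))  # False
```

A default-deny check like this keeps automation from quietly creating cross-cloud workflows during an incident, when nobody is watching costs.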
Rule 2. Don't use data centers or VPNs to funnel traffic between clouds
When managing multi-cloud integration costs, don't use the data center or a VPN to funnel traffic between clouds. Ingress and egress fees are related to traffic flows in and out of each cloud. It doesn't matter in terms of cost whether that traffic flows through the data center as an intermediary point.
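One way to see why the intermediary doesn't help is to count the billable cloud boundary crossings along each route. The toy model below makes assumptions for illustration: two hypothetical clouds, a data center that bills no transfer charges of its own, and one billable event per cloud boundary crossed.

```python
# Toy model: each hop out of a cloud is an egress event, each hop into
# a cloud is an ingress event. Node names are hypothetical.
CLOUDS = {"cloud-a", "cloud-b"}  # the data center is not a cloud node

def boundary_crossings(path):
    """Count billable cloud boundary events along a route."""
    events = 0
    for src, dst in zip(path, path[1:]):
        if src in CLOUDS:
            events += 1  # egress from src
        if dst in CLOUDS:
            events += 1  # ingress to dst
    return events

# Direct cloud-to-cloud and funneling through the data center cross
# the same number of billable boundaries.
print(boundary_crossings(["cloud-a", "cloud-b"]))                # 2
print(boundary_crossings(["cloud-a", "datacenter", "cloud-b"]))  # 2
```

The detour changes the path, not the number of times traffic leaves and enters a cloud, which is what the charges attach to.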
Many multi-cloud applications still funnel traffic through data centers or VPNs. Even where routing through the company VPN or data center costs no more than a direct cloud-to-cloud connection, it still hurts application response time and quality of experience (QoE). Enterprises are increasingly aware of this rule and report that they routinely address violations of it in application redesigns.
Rule 3. Interact with data where it's stored
Move the work to the data, not the data to the work. An application shouldn't access data that's hosted elsewhere, because pulling that data across a cloud boundary generates traffic that's subject to ingress and egress charges. Enterprises that need to access a database to process a transaction should host that part of the application where the data is stored, which is usually the data center.
A corollary of this rule: if enterprises can't move the work to the data, they should store the data where it has to be used. Few enterprises are willing to host all their mission-critical data in the cloud, so if access to that data is essential, the best strategy is to pass the work into the data center where it's stored.
When response time and QoE demand that data access integrate with cloud processing, it's often possible to move a summarized version of a mission-critical database into the cloud. In that case, it's important to replicate the summary to every cloud in the multi-cloud environment where users perform the work associated with that data. Otherwise, costs for data movement rise.
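The summarization idea can be sketched in a few lines. In this illustration, the detail rows stay in the data center and only a much smaller per-region rollup moves to the clouds; the field names, regions and cloud names are all hypothetical.

```python
from collections import defaultdict

# Detail rows stay in the data center; only the compact summary is
# replicated to each cloud where users work with the data.
transactions = [
    {"region": "emea", "amount": 120.0},
    {"region": "emea", "amount": 80.0},
    {"region": "apac", "amount": 200.0},
]

def summarize(rows):
    """Collapse detail rows into per-region totals."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

summary = summarize(transactions)
# Replicate the same summary to every cloud that needs it, so queries
# against it never cross a cloud boundary.
replicas = {cloud: summary for cloud in ("cloud-a", "cloud-b")}
print(summary)  # {'emea': 200.0, 'apac': 200.0}
```

The traffic saved is the difference between shipping the detail rows on every query and shipping the rollup once per refresh cycle.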
Other integration cost issues
While ingress and egress traffic charges are the biggest issue in optimizing multi-cloud integration costs, other challenges are common.
Differences in cloud service and feature costs across a multi-cloud environment can generate unexpected integration costs. Cloud providers don't charge for usage, traffic or features in the same way. Overall costs change depending on where application components run, so be aware of differences among providers. Enterprises might find it beneficial to select a different set of cloud providers if the cost differences are significant.
The connection to the company VPN can raise challenges. It's possible to extend the company VPN to the cloud using direct leased connections or via the internet through technology such as software-defined WAN. These connections can create a path between clouds if multiple clouds are connected to the VPN. Network topology updates can discover and potentially use this path. The path's availability can also disguise situations where components in different clouds exchange messages, which creates ingress and egress costs. It's prudent to manage VPN connectivity carefully and permit only explicitly authorized traffic to pass from VPN to clouds.
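One way to enforce that last point is a default-deny allowlist of permitted VPN-to-cloud flows, so a cloud-to-cloud path through the VPN is never usable by accident. A minimal sketch, with hypothetical endpoint names:

```python
# Only explicitly authorized (source, destination) flows may cross
# between the VPN and a cloud. Entries are illustrative.
ALLOWED_FLOWS = {
    ("vpn", "cloud-a"),
    ("vpn", "cloud-b"),
}

def flow_permitted(source: str, destination: str) -> bool:
    """Default-deny: a flow passes only if it's on the allowlist, so an
    unintended cloud-a -> cloud-b path via the VPN is blocked."""
    return (source, destination) in ALLOWED_FLOWS

print(flow_permitted("vpn", "cloud-a"))      # True
print(flow_permitted("cloud-a", "cloud-b"))  # False
```

In practice this logic would live in firewall or SD-WAN policy rather than application code, but the principle is the same: nothing crosses the VPN-cloud boundary unless someone authorized it.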
Consider application design and tradeoffs
Nearly all multi-cloud integration problems can be solved only through careful application design and component deployment decisions. Integration consoles or tools can help uncover the problems that application modifications or deployment management must solve.
Remember that some multi-cloud justifications, including failover and cloud bursting across clouds, inherently include tradeoffs among additional costs, performance risks and availability problems. A decision to provide a second cloud in hot standby mode, for example, would likely double cloud costs.
Enterprises must decide whether they can justify that cost, given the chance they'll actually need those resources. That's a business decision, and an incorrect one could be riskier in terms of multi-cloud costs than any other factor cited here. As with application design and deployment policies, it's important to analyze risk and reward.