For many organizations, the gradual adoption of cloud computing was the right strategy. It allowed companies -- and their IT staffs -- to adapt to changes in application and data access. Yet, from a network perspective, this migration methodology has come at a cost.

Early on, enterprises used cheap and rapidly deployable VPN tunnels for access to their IaaS providers. At the time, the VPN was sufficient for the workloads commonly being moved. However, many enterprises are now reaching the point where high-bandwidth and latency-sensitive applications are shifting to the cloud, and the standard VPN won't cut it. So what's the fix? Dedicated cloud interconnects.
Dedicated cloud links
Dedicated cloud interconnects don't simply provide guaranteed bandwidth and lower latency. They also solve several additional issues network administrators face today. For one, moving to a dedicated cloud interconnect relieves the growing strain on internet bandwidth at corporate offices. Because VPN connections to the cloud use the public internet for connectivity, corporate users attempting to access cloud applications compete for a limited amount of bandwidth. Offloading cloud traffic to a separate connection removes the internet edge as a potential bottleneck.
Second, direct interconnect options come with a much-coveted WAN service-level agreement (SLA) from the provider. Because VPN tunnels use the public internet for underlying connectivity, uptime can never be guaranteed; there are simply too many variables outside the control of the provider and customer. With direct connect links, however, the provider fully manages and controls WAN communications from end to end, so the connection can be backed by a standard WAN SLA.
A third benefit can be gained if you have a project that requires moving massive amounts of data into, out of or within cloud availability zones. Significant cost savings can be found when shuffling lots of data around. This, of course, is in addition to the throughput increase you will gain when using dedicated cloud interconnects. In most cases, you choose what bandwidth your organization requires for its direct connection -- then pay for that bandwidth on an hour-by-hour billing schedule. Moving data into the cloud is either free or assessed a nominal charge. Moving data out of the cloud is typically more expensive; however, compared with other options for downloading thousands of gigabytes, the dedicated interconnect is considerably less costly.
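The billing model above lends itself to a quick back-of-the-envelope comparison. The following sketch is illustrative only: every rate in it is a hypothetical placeholder, not any provider's actual price, so check the current price sheet for your cloud before drawing conclusions.

```python
# Back-of-the-envelope egress cost comparison between internet-based
# transfer and a dedicated interconnect. All rates are hypothetical
# placeholders, not real provider pricing.

INTERNET_EGRESS_PER_GB = 0.09   # hypothetical $/GB over the public internet
DIRECT_EGRESS_PER_GB = 0.02     # hypothetical $/GB over a dedicated link
PORT_HOUR_RATE = 0.30           # hypothetical $/hour for a dedicated port


def monthly_cost_internet(egress_gb: float) -> float:
    """Per-GB transfer cost only; the VPN rides the existing internet link."""
    return egress_gb * INTERNET_EGRESS_PER_GB


def monthly_cost_direct(egress_gb: float, hours: float = 730) -> float:
    """Hourly port charge plus the (lower) per-GB egress rate."""
    return hours * PORT_HOUR_RATE + egress_gb * DIRECT_EGRESS_PER_GB


if __name__ == "__main__":
    for gb in (1_000, 10_000, 50_000):
        print(f"{gb:>6} GB  internet: ${monthly_cost_internet(gb):>8.2f}  "
              f"direct: ${monthly_cost_direct(gb):>8.2f}")
```

Note that with these placeholder rates, the fixed port-hour charge means the dedicated link only wins past a certain monthly volume; at low volumes, internet-based transfer remains cheaper. Run the numbers for your own workload before committing.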
Deploying dedicated cloud links
If you're interested in dedicated cloud interconnects as a replacement for your existing VPN tunnels, your first task is to verify that your cloud provider offers this type of WAN option where your apps and data reside. If you use one of the global IaaS providers, this will almost certainly be an option. That said, understand that many providers use an intermediary data center partner to connect to their clouds. For example, AWS maintains a list of Direct Connect locations that are essentially third-party data centers. To get direct connectivity into AWS, the customer first connects to the third-party data center. The data center partner then completes the second half of the connection by linking directly into AWS data centers. While some cloud providers do allow customers to link directly to their cloud data centers, it's more common to see geographically dispersed third-party data centers used as aggregation points for direct connect customers.
Last, note that the most common direct connect network handoff is a Layer 2 802.1Q trunk. While this allows for simple, flexible connectivity, it isn't as resilient as the alternatives. Some providers offer more robust Layer 3 connectivity options using dynamic routing protocols. Microsoft Azure, for example, allows multiple direct connect links between its cloud data centers and the customer. Border Gateway Protocol (BGP) can then be used for redundancy and load balancing across both pipes.
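As a rough illustration of the redundant Layer 3 approach, a customer router might peer with the provider over each physical link and let BGP handle failover and load sharing. The fragment below is a Cisco IOS-style sketch only; every ASN, address and value in it is a hypothetical placeholder, not taken from any provider's documentation.

```
! Hedged sketch: dual direct connect links with eBGP (Cisco IOS-style
! syntax). All ASNs and IP addresses are hypothetical placeholders.
router bgp 65001
 ! One eBGP session per physical link to the provider edge
 neighbor 169.254.10.1 remote-as 64512
 neighbor 169.254.20.1 remote-as 64512
 ! Install both equal-cost eBGP paths to load-balance across the two pipes
 maximum-paths 2
```

If one link fails, BGP withdraws the routes learned over it and traffic converges onto the surviving pipe, which is the redundancy behavior described above.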