The battle for network edges is straightforward.
It's a conflict between the competing strategies of on-premises services versus those delivered by the cloud. Organizations are pushed to move services from on-premises equipment to the cloud on the promise of fixed -- or at least predictable -- costs. Conversely, features available on small branch and edge devices today would have filled a 40U rack 15 years ago. With today's technologies, keeping it local becomes a viable and attractive option.
Fundamentally, even the most cloud-leveraged option still requires some local services, but the question remains: Where is the line drawn?
Working from bottom to top, here are the bare-minimum bootstrap capabilities needed in typical network edges:
Network termination and Ethernet switching: Devices in the branch need to connect to something. As network providers move toward Ethernet termination, a dedicated router is no longer necessary; a Layer 3 branch switch is more than adequate. Furthermore, if you are prepared to go white box, you'd be stunned at how cheap MPLS and VPN features can be.
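To make the white-box point concrete, here is a hypothetical configuration sketch in the style of an open routing stack such as FRRouting, showing a branch box doing Layer 3 routing with MPLS label distribution. The interface names, addresses and area numbers are invented for illustration; real deployments will differ.

```
! Illustrative sketch only -- FRRouting-style syntax, hypothetical values.
! OSPF provides the IGP reachability that LDP needs.
router ospf
 network 10.0.0.0/24 area 0
!
! LDP turns the branch switch into an MPLS label-switching router.
mpls ldp
 router-id 10.0.0.1
 address-family ipv4
  discovery transport-address 10.0.0.1
  interface eth0
 exit-address-family
```

On commodity hardware, this feature set costs little more than the box itself, which is the point of the white-box argument above.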
Wireless: It's not uncommon to find wireless capabilities built into network edge devices. However, the laws of physics limit the range of a single access point (AP). For anything more complex than a small office, additional APs will be required. This will require a reconsideration of the first point -- and whether you need to open the Power over Ethernet can of worms. However, unless you have a particularly large or dynamic environment, wireless management is a good candidate for the cloud.
DHCP: Dynamic Host Configuration Protocol services are required for both wireless and wired connectivity. Your common-or-garden home router can do DHCP, but there is more to it than handing out addresses. Enterprise services, such as VoIP handsets and wireless APs, typically rely on DHCP for provisioning. At a bare minimum, an edge device can provide DHCP relay services to a cloud-based DHCP server. For hub-and-spoke networks, this may be a necessity, as managing hundreds -- or even thousands -- of branches may otherwise be impossible. However, depending entirely on remote DHCP services is a risky business; transient network conditions -- or even just a big mailbox sync saturating the WAN link -- may temporarily break the DHCP DORA (discover, offer, request, acknowledgment) exchange. The protocol itself is quite resilient, but it doesn't take much to stump it. Keep it local.
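The fragility argument above comes down to the DORA exchange needing all four messages to complete. A minimal sketch of the client side as a state machine makes the failure mode visible; message names follow RFC 2131, while the transport, timers and packet formats are deliberately omitted.

```python
# Illustrative sketch of the DHCP DORA exchange as a client-side
# state machine. Names follow RFC 2131; everything else is simplified.
from enum import Enum, auto

class State(Enum):
    INIT = auto()
    SELECTING = auto()   # DISCOVER sent, waiting for OFFER
    REQUESTING = auto()  # REQUEST sent, waiting for ACK
    BOUND = auto()       # lease acquired

def next_state(state, event):
    """Advance the client through discover/offer/request/acknowledge."""
    transitions = {
        (State.INIT, "send_discover"): State.SELECTING,
        (State.SELECTING, "recv_offer"): State.REQUESTING,
        (State.REQUESTING, "recv_ack"): State.BOUND,
        # A lost OFFER or ACK drops the client back to INIT to retry.
        (State.SELECTING, "timeout"): State.INIT,
        (State.REQUESTING, "timeout"): State.INIT,
    }
    return transitions.get((state, event), state)

# Happy path: all four messages arrive and the client is bound.
s = State.INIT
for event in ("send_discover", "recv_offer", "recv_ack"):
    s = next_state(s, event)
assert s is State.BOUND

# A transient outage that swallows the OFFER forces a full restart.
s = next_state(next_state(State.INIT, "send_discover"), "timeout")
assert s is State.INIT
```

A congested WAN link only has to delay one of the four messages past the client's timeout to force that restart, which is why relaying every branch's DORA across the WAN is a gamble.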
DNS: The domain name system is another used-by-everything service, relied on by everything from web browsers to any Microsoft Active Directory or Exchange server you have lurking about the place. It works just fine off premises, so local DNS is only necessary for the largest of branches.
Time services: Many critical applications and services depend on the Network Time Protocol. In most cases, atomic-clock accuracy isn't a requirement, but keeping everything in sync is. Pick a cloud time source and use it everywhere.
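The "pick one cloud source, use it everywhere" advice above can be expressed as a short configuration fragment. This is a hypothetical chrony-style sketch -- the pool hostname and client subnet are invented for illustration.

```
# Illustrative chrony.conf sketch -- hostname and subnet are hypothetical.
# One cloud time source, used at every site, keeps all branches in sync.
pool time.example.com iburst maxsources 4

# Step the clock if it is badly wrong at startup, then slew thereafter.
makestep 1.0 3

# Allow local wired and wireless clients to sync from this edge device.
allow 10.20.0.0/16
```

The point is consistency, not atomic-clock accuracy: every branch chasing the same source means timestamps line up across the estate.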
File and print: These necessities just won't go away. Centralized print and file stores are fine, but anyone who's regularly used a file server on the other side of an ocean -- WAN optimization, be darned -- will tell you the best tool to mitigate latency is an Ethernet cable. Given the privacy and legal issues of casting your data to the winds of Dropbox, and considering the cheap, good, branch network-attached storage products readily available, in my view, there isn't much of a case for deploying file and print on the network edge -- let alone the cloud.
Mail and mail filtering: Mail is perhaps the most pervasive of cloud services. It works, and it works well. Keep it in the cloud.
Security: This is where it gets interesting. The temptation is to deploy all such services in the cloud -- from firewalls to content scanning. However, network security doesn't work like that. Away from fancy security tools and eagle-eyed administrators, the branch is a soft target for social engineering attacks, or just plain accidents. Web content filtering, intrusion detection and application control at the branch nip a variety of threats in the bud, regardless of the site's size. Security remains a good candidate for local enforcement.
Applications: Applications belong in the cloud, or at least in a data center. Edge devices are beginning to creep onto the market with embedded hypervisors. The temptation is to use these "free" computing resources to deploy local instances of applications. However, this is a square peg in a round hole. Handing CPU resources over to a guest is a gamble that a busy guest won't starve something more important, such as packet forwarding. One also inevitably pays a premium for the convenience of embedding virtualization into a switch, router or firewall, compared with just buying a microserver. Using an edge-embedded hypervisor is fine for point services with easy cloud failback, but I'd cancel those plans for an Exchange server at every site.
In short, the further up the stack you go, the better the argument is for moving services to the cloud. Conversely, the fundamental services need to stay where they are. Of course, I'm using the cloud in the loosest terms; we could be talking about cloud-native services, such as Zscaler or Meraki, an Amazon Web Services Virtual Private Cloud, or just your own data center. The dilemma is the same: whether to make branches self-sufficient or dependent on the cloud.
A cloud-centric model is helpful for dynamic scenarios where branches come and go; fewer moving parts means you can be operational tout de suite, and that can have a real business impact. However, in sites attached to fixed assets -- such as manufacturing or other permanent facilities -- there is an argument for tactically devolving risk. For example, the loss of a file store for 25 branch users would be a problem; one serving 5,000 users would be a disaster. Neither option is without risk, but the options available in modern network edges at least allow you to balance your risks against your business demands.