
Micro-cloud data centers: IT's new work front

Edge IT, micro-cloud data centers, containerization and autonomous operations are spreading through enterprises. IT leaders must find new ways to deploy and maintain these assets.

Edge computing is expanding, and enterprises are rethinking data centers. One trend rapidly gaining traction is the decentralization of data center assets into micro-cloud data centers deployed close to IT-intensive edge sites.

On the IT side, this entails moving infrastructure and systems to the edge, and containerization is central to this strategy. However, deploying and maintaining container health at the edge differs from doing so in a central data center. This article outlines the ground rules for successful container deployment and maintenance in edge micro-cloud data centers.

The micro-cloud data center and containerization revolution

First, let's look at why enterprises are expanding micro-cloud data centers and containerizing them.

With a micro-cloud data center, an edge location receives a pared-down version of the central data center, delivered in a single cloud-hosted rack that includes networking, storage, processing, security and apps. The goal of the micro-cloud data center is to provide edge locations with all the real-time compute they need in a self-contained package. This eliminates the need to access remote resources, such as the enterprise's central data center.

Containerization plays a vital role in these micro-cloud data centers because a containerized IT ecosystem at the edge, a miniature version of what one would find in the central data center, is sufficient to run an edge location's IT. Containerization enables edge locations to operate autonomously.

On the IT side, however, deploying containers in edge micro-cloud data centers presents significant management challenges, including resource sizing, security, consistency and support.

Resource sizing

IT might initially provision more storage and processing than an edge micro-cloud data center requires. This can lead to resource waste. Conversely, there may be occasions when the edge micro-cloud data center must scale up, such as during a major sale at a retail outlet. Nothing about these situations differs from what IT does in the central data center, but resource sizing and monitoring must now be performed across multiple edge sites. This increases the IT workload.

Security

Users and their security permissions come and go, and in some cases, individual users move among many different edge sites. A user might have different levels of access at each site, and these might even differ at the container level. In this scenario, security is harder for IT to manage.

Consistency

When every edge site runs the same OS and applications in its micro-cloud data center containers, maintaining consistency is essential. For example, a retail point-of-sale system can be replicated in containers for each retail store. To keep operations smooth, OS kernels, IT infrastructure components and applications must stay at the same revision levels across all edge sites. This can be easier said than done.
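
One way to automate that consistency check is to compare version manifests collected from each site and flag any component that is running at more than one revision. The sketch below assumes each site can report a simple component-to-version manifest; the site and component names are illustrative, not from any specific inventory tool.

```python
# Sketch: detect revision drift across edge sites, assuming each site
# reports a manifest mapping component name to version. Names are
# illustrative, not tied to a specific inventory system.
from collections import defaultdict

def find_drift(manifests):
    """manifests: {site: {component: version}}.
    Returns only components running at more than one version,
    with the sites pinned to each version."""
    by_component = defaultdict(lambda: defaultdict(list))
    for site, components in manifests.items():
        for component, version in components.items():
            by_component[component][version].append(site)
    return {c: {v: sorted(sites) for v, sites in versions.items()}
            for c, versions in by_component.items() if len(versions) > 1}

manifests = {
    "store-01": {"os-kernel": "5.15.0", "pos-app": "2.4.1"},
    "store-02": {"os-kernel": "5.15.0", "pos-app": "2.4.0"},  # lagging
    "store-03": {"os-kernel": "5.15.0", "pos-app": "2.4.1"},
}

for component, versions in find_drift(manifests).items():
    print(f"{component} out of sync across sites: {versions}")
```

Run on a schedule, a report like this turns "easier said than done" into a concrete punch list: any component that appears in the output is due for a rollout at the lagging sites.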

Support

System and network glitches are inevitable in micro-cloud data centers and containers. In any given situation, it is critical to know who fixes them: IT or the cloud provider?

Micro-cloud data center and containerization best practices

Best practices in resource optimization, workflow orchestration and security enable IT teams to navigate the complexities of containerized applications effectively, delivering greater value while protecting critical data.

1. Right-size each container for the application that will use it

Containerization platforms, such as Kubernetes, were developed for cloud environments, where resource allocation and de-allocation are flexible and fluid. This fluidity in scaling up or down can present its own cost challenges. How can IT ensure that the cloud resources being provisioned are optimized to avoid cost overruns?

To address the cost challenge, IT should calculate the processing, storage and networking requirements for each edge container before deployment. In pre-deployment testing, resource allocations can be trialed through user stress tests to ensure they can handle anticipated edge workloads. Performance should be continuously monitored and rechecked by IT, with the ability to adjust resource usage upward or downward based on demand. These storage, processing and networking provisions and budgets should be discussed, negotiated and priced upfront with cloud providers and incorporated into cloud contracts and SLAs.
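
The sizing step above can be reduced to a simple calculation over monitored usage samples: target a high percentile of observed demand plus some headroom, so that brief spikes do not inflate the steady-state allocation. The percentile and headroom values below are assumed starting points to tune per workload, not platform defaults.

```python
# Sketch: derive a container resource request from observed usage
# samples. The 90th-percentile target and 15% headroom are assumed
# starting points, not Kubernetes defaults.
import math

def recommend_request(usage_samples, percentile=0.90, headroom=1.15):
    """Size the request at the given usage percentile plus headroom,
    so brief spikes do not drive the steady-state allocation."""
    ordered = sorted(usage_samples)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return ordered[idx] * headroom

# CPU usage samples in millicores from monitoring; 640 is a brief spike.
cpu_millicores = [120, 180, 150, 640, 200, 170, 210, 160, 190, 175]
print(f"suggested CPU request: {recommend_request(cpu_millicores):.0f}m")
```

Note that the outlier sample barely moves the recommendation; scale-up events such as a major sale are better handled by the platform's autoscaling limits than by permanently oversizing the request.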

At deployment and during ongoing maintenance, each container should be checked to remove any extraneous files that may have been inadvertently left in it. The usual culprits are duplicate or "leftover" files from development and testing.
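
As a sketch of that hygiene check, a script can walk an unpacked image filesystem and flag likely development leftovers before the image ships. The patterns below are illustrative assumptions, not a definitive list; tune them to the team's build conventions.

```python
# Sketch: flag likely development/test leftovers in an unpacked
# container filesystem before the image ships. Patterns are
# illustrative assumptions, not a definitive list.
from pathlib import Path

LEFTOVER_PATTERNS = ("*.pyc", "*.log", "*.bak", "*.orig", "*~")

def find_leftovers(root):
    """Return files under root matching any leftover pattern."""
    root = Path(root)
    hits = set()
    for pattern in LEFTOVER_PATTERNS:
        hits.update(p for p in root.rglob(pattern) if p.is_file())
    return sorted(hits)

if __name__ == "__main__":
    for path in find_leftovers("."):
        print(f"leftover: {path}")
```

A check like this also pays off at the edge specifically: stripped images are smaller to push over constrained links to each micro-cloud site.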

2. Orchestrate workflows with cloud providers

IT should actively manage micro-cloud data centers and containers and should not delegate this responsibility to cloud providers. Responsibilities include orchestrating application and data workloads and flows between cloud micro data centers and edge sites; establishing a protocol for container maintenance and upgrades; and ensuring that enterprise-level security and backups are consistently maintained everywhere -- on the cloud, at the edge and in transit.

3. Button down security

Five years ago, it was common practice to enforce security only at the network and cloud edges. But now, with containerization and the segmented networks used by edge micro-cloud data centers, security must be enforced at the entry point to each container on the network, as well as at the network and cloud edges.

User security permissions are a second issue because a user can work at multiple edge locations and use multiple micro-cloud data centers. Each micro-cloud data center contains a set of containerized applications, and the user might not have the same level of authorization for each container. User permissions change from container to container and from edge site to edge site. IT must ensure that each user, regardless of which micro-cloud data center and its containers they access, has the proper level of authorization to use those assets. The best way to do this is to actively work with edge site user coordinators who define these security access policies.
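
One minimal way to model this is a central policy store keyed by user, site and container, with deny-by-default lookup. The policy entries and names below are hypothetical examples, not a real directory integration.

```python
# Sketch: resolve a user's authorization per edge site and per
# container, denying by default. Policy entries are hypothetical
# examples, not a real directory integration.
ACCESS_POLICY = {
    ("alice", "store-01", "pos"):       "read-write",
    ("alice", "store-02", "pos"):       "read-only",   # same app, lower access
    ("alice", "store-02", "inventory"): "read-write",
}

def authorization(user, site, container):
    """Deny by default: any tuple not in the policy gets 'none'."""
    return ACCESS_POLICY.get((user, site, container), "none")

print(authorization("alice", "store-01", "pos"))  # full access at home site
print(authorization("alice", "store-02", "pos"))  # reduced access when visiting
print(authorization("bob", "store-01", "pos"))    # unknown user: denied
```

The (user, site, container) key captures exactly the problem described above: the same user holding different authorization levels per container and per edge site, with the edge site user coordinators supplying the policy entries.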

4. Formalize a micro-cloud data center and containerization management strategy

Micro-cloud data centers and containerization create complexity. To address this, IT departments should collaborate with edge users and cloud providers to maintain micro-cloud data centers and containerized applications, but the ultimate management responsibility rests with IT.

IT should approach this responsibility aggressively, with defined policies and protocols for micro-cloud data center and container management. These policies and protocols should be used to define SLA requirements for cloud providers and incorporated into cloud contracts. Rigorous policies and protocols should also be in place for containerization, as many containers are mirror images of one another. It's up to IT to ensure that software, firmware, hardware and security for all these containerized assets remain in sync.

Mary E. Shacklett is president of Transworld Data, a technology analytics, market research and consulting firm.
