
Kubernetes management for 'Minecraft' has enterprise IT traits

What do 'Minecraft' servers and enterprise databases have in common? In the realm of Kubernetes management, more than you might think.

A Minecraft social network's container management challenges reflect those faced by many large enterprise IT shops.

Mineteria Inc., a small startup with a distributed workforce of 10 employees and equity partners, at first seems like a typical greenfield case for Kubernetes management. Unlike large enterprises, it doesn't have to worry about legacy applications and infrastructure; its business is conducted primarily on the web, where it runs forums and a store and hosts online events for Minecraft gamers.

But behind the scenes, Mineteria's Kubernetes management workflow is more similar to an enterprise stateful application environment, with persistent network connections and high availability requirements, than to those that support ephemeral web applications.

"What's unique about Minecraft is that we have long-lived, persistent [network] connections," said Zach Schrier, CEO at Mineteria. "With a web application, if there's a small issue, you can just refresh the page, and it'll come right back. But with a gaming application, it's particularly important we maintain the [application] state for all the players."

Kubernetes management tricks for persistence pay off

When Mineteria first launched its service in 2016, it used Google Cloud Platform and then Google Kubernetes Engine, partially paid for with Google Cloud for Startups program credits. But, as the company grew, it struggled with Google's bandwidth pricing, which went as high as $0.12 per gigabyte.

Mineteria also qualified for DigitalOcean Hatch, the provider's startup program, which offered much more attractive bandwidth pricing and charged just 1 cent per gigabyte for traffic that exceeded the built-in bandwidth allowances for droplets, DigitalOcean's term for its cloud VMs, Schrier said.

Mineteria uses DigitalOcean's hosted PostgreSQL databases for most of its applications' stateful data. However, given its persistent network connection and high-availability requirements, Mineteria has also created its own Kubernetes management applications to deploy and maintain workloads.

A Go application called Mineteria Controller deploys Mineteria's applications as pods on DigitalOcean Kubernetes once developers check code into Git and Jenkins completes the initial builds. Mineteria Controller also handles how traffic is directed to new versions of the application, so persistent network connections aren't interrupted.
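Mineteria hasn't published the Controller's code, but a minimal client-go sketch of the general pattern -- creating a new Deployment alongside the old one rather than rolling it in place -- might look like the following. The namespace, names, labels and image are illustrative assumptions, not Mineteria's actual configuration.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location; a controller running
	// inside the cluster would use in-cluster config instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	version := "v2" // hypothetical new release tag
	replicas := int32(3)

	// Create a new Deployment for the new version alongside the old one,
	// rather than rolling the existing pods, so current player sessions
	// keep running on the old pods until they drain naturally.
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "game-" + version, Namespace: "minecraft"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "game", "version": version}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "game", "version": version}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "game",
						Image: "registry.example.com/game:" + version, // placeholder image
					}},
				},
			},
		},
	}
	_, err = clientset.AppsV1().Deployments("minecraft").Create(context.TODO(), dep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("new version deployed; the proxy layer can now route new player connections to it")
}
```

In this sketch, routing new player connections to the fresh pods is left to a separate proxy layer; the point is that the old Deployment is never torn down while players are still attached to it.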

This process -- a version of a canary application deployment pattern -- is uniquely suited to containers and Kubernetes, Schrier said.

"All of our competition, at 4 a.m. every day, they'll say, 'Everyone get off [our network]. We're rebooting our system to apply updates.' And we don't have to do that," Schrier said. "We can do a deployment at peak time when there are thousands of players online, because we don't shut down the old servers; we just spin up new ones and wait for people to flow into them."

Packaged deployment tools, such as Netflix's Spinnaker, typically switch traffic to new nodes more quickly than Mineteria would like.

"It's not just a website," Schrier said. "We can't just shut the old pod down right away and let the network connection be lost."

Kubernetes' default settings for container placement on worker nodes are based on CPU thresholds. That isn't granular enough for Mineteria's needs: workloads can scale unpredictably if a large YouTube influencer suddenly directs traffic to its network. To handle this, Mineteria Controller assigns workloads through the Kubernetes API and the DigitalOcean API, so game servers likely to see heavy traffic aren't placed on the same worker node.
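The article doesn't describe how the Controller expresses those placement rules; pod anti-affinity is the standard Kubernetes way to keep two such workloads off the same node, and the sketch below illustrates that approach. The traffic=heavy label is a hypothetical marker a controller might attach to game servers it expects to be busy.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical placement rule: a pod labeled traffic=heavy must not be
	// scheduled onto a node that already runs another traffic=heavy pod.
	spec := corev1.PodSpec{
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"traffic": "heavy"},
					},
					// Spread by hostname, i.e. by droplet/worker node.
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		},
		Containers: []corev1.Container{{
			Name:  "game",
			Image: "registry.example.com/game:v1", // placeholder image
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```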

Kubernetes management growing pains in the cloud

The DigitalOcean Kubernetes service, which Mineteria uses in limited preview, handles much of the low-level networking administration and maintenance of Kubernetes master servers for the company. But the early-stage service still has kinks to work out.

DigitalOcean doesn't have a managed service for Redis, the in-memory database, so Mineteria manages its own high-availability deployment of that application in three connected pods within its Kubernetes infrastructure for now. Mineteria conducts chaos tests to ensure the Redis service will remain online if a droplet fails.
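Mineteria's chaos tests aren't public. As a rough sketch of that kind of check, the program below, using client-go and the go-redis client, deletes one Redis pod and verifies the service still answers a PING. Killing a pod is a simplification of losing a whole droplet, and the namespace, label selector and service address are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()

	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Find the Redis pods (hypothetical namespace and label) and kill one.
	pods, err := clientset.CoreV1().Pods("redis").List(ctx, metav1.ListOptions{LabelSelector: "app=redis"})
	if err != nil || len(pods.Items) == 0 {
		panic("no redis pods found")
	}
	victim := pods.Items[0].Name
	if err := clientset.CoreV1().Pods("redis").Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deleted pod:", victim)

	// Give the remaining replicas a moment to fail over, then verify the
	// service address still answers a PING.
	time.Sleep(10 * time.Second)
	rdb := redis.NewClient(&redis.Options{Addr: "redis.redis.svc.cluster.local:6379"}) // assumed service DNS name
	if err := rdb.Ping(ctx).Err(); err != nil {
		fmt.Println("FAIL: redis unreachable after pod loss:", err)
		return
	}
	fmt.Println("OK: redis still reachable after pod loss")
}
```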

DigitalOcean also lacks a native container registry, so Mineteria still uses Google's version to host container images.


Another application created by Mineteria, DO Controller, automatically provisions DigitalOcean droplets as the cluster grows, although a feature limitation in DigitalOcean Kubernetes keeps part of that process manual for now. DigitalOcean's user interface doesn't support the custom Kubernetes node labels Mineteria needs to determine where pods are placed on hosts, so Mineteria admins must run kubectl commands manually to apply them.
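That manual step amounts to applying a node label, along the lines of kubectl label node NODE_NAME workload-class=game, where the label key and value are hypothetical. Once DigitalOcean exposes custom labels, a controller could apply the same label through the Kubernetes API, roughly like this:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl label node NODE_NAME workload-class=game`:
	// merge-patch a custom label onto the node so a scheduler rule or a
	// controller can use it for placement. Node name and label are
	// hypothetical.
	nodeName := "worker-droplet-1"
	patch := []byte(`{"metadata":{"labels":{"workload-class":"game"}}}`)
	_, err = clientset.CoreV1().Nodes().Patch(context.TODO(), nodeName, types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labeled node", nodeName)
}
```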

Here, a microservices approach ensures high availability for Mineteria's applications, regardless of whether there are problems or delays in the droplet provisioning process.

"We could rip DO Controller off our clusters, and while it wouldn't launch new VMs, the Mineteria containers wouldn't break," Schrier said. "That separation of concerns through microservices means that, if one process dies, we won't have a catastrophic failure."

As the DigitalOcean Kubernetes service moves toward general availability in 2019, Schrier's team must still deal with a few disruptive upgrades, such as the introduction of a feature that makes Kubernetes load balancing more efficient. A new setting for DigitalOcean load balancers used with Kubernetes will reduce latency and improve throughput, because it reduces the number of network hops needed to route traffic to an application. But it will require Mineteria to rebuild and reboot its cluster, with a 30-minute maintenance outage.
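DigitalOcean hasn't described the setting in detail, but Kubernetes itself has a comparable option: setting a Service's externalTrafficPolicy to Local keeps incoming traffic on the node the load balancer delivered it to, skipping the extra hop to a pod on another node. The sketch below shows that standard Kubernetes field; whether it is the same mechanism DigitalOcean plans to use is an assumption, and the Service name, namespace and selector are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A LoadBalancer Service with externalTrafficPolicy: Local. With the
	// default Cluster policy, kube-proxy may forward an incoming connection
	// to a pod on a different node, adding a hop; Local keeps traffic on
	// the node that received it.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "game", Namespace: "minecraft"},
		Spec: corev1.ServiceSpec{
			Type:                  corev1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "game"},
			Ports: []corev1.ServicePort{{
				Port:       25565, // Minecraft's default port
				TargetPort: intstr.FromInt(25565),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```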

DigitalOcean plans to add a hosted container registry and a hosted Redis database service, a company spokesperson said. Redis will be available in the third quarter of 2019. The company also plans to address the custom node label issue Mineteria currently handles manually, though no specific time frame for an update is available. Once the service reaches general availability, updates such as the Kubernetes load-balancer setting will be delivered to users automatically and nondisruptively, the spokesperson said.
