Following the breakup and restructuring of Docker, D2iQ, the relaunched version of Mesosphere, looks to make its own pivot to Kubernetes management while avoiding its competitor's fate.
D2iQ's enterprise Kubernetes distribution, Konvoy, was released when the company relaunched in August, and this week it added a multi-cloud container management product, Kommander. This new product is similar to Red Hat OpenShift, which integrates other open source infrastructure automation projects such as the Istio service mesh alongside Kubernetes, and offers governance and security features such as secrets management.
D2iQ also said it will build its own CI/CD tool called Dispatch into Kommander. Few details are available about the underpinnings for Dispatch yet, and whether it, too, will compete with Kubernetes-affiliated CI/CD tools such as Spinnaker and Tekton, but its overall mission, according to D2iQ co-founder and CTO Tobi Knaup, is to support enterprise GitOps workflows.
The leaders of the company, founded in 2013, know what they're up against in a market crowded with competitors that's already begun to see vendor attrition -- namely, they're up against Google, IBM-Red Hat and smaller competitors such as Rancher. SearchITOperations caught up with Knaup this week to discuss D2iQ's plans to stay relevant in the cutthroat Kubernetes management market.
I'm sure you're aware of the huge news this week about Docker Enterprise being acquired by Mirantis. How does D2iQ avoid that same fate as it also pivots to Kubernetes?
Tobi Knaup: It is a space where size matters to some degree. To really make products enterprise-grade, it takes big investment. There's a list that the CNCF publishes of all the different Kubernetes distributions, and I think there are close to 100 -- a lot of them smaller companies. We're very confident that we can compete in this space -- we've got great investors, we raised a lot of money and still have much of it in the bank, [as well as] cloud-native talent, which is expensive and in high demand.
What we've focused on since the early days of the company is building products that allow enterprises to use cloud-native tools, all the way back to Apache Mesos, Kafka, Cassandra and so on. DC/OS, which we launched in 2015, made Apache Mesos enterprise-grade. We focused on building a business, and an enterprise business, early on. We built relationships with some of the leading companies in the world. That continues to be our focus, building a successful enterprise business around the open source core.
Enterprise support for cloud-native open source projects such as Istio is also the focus for IBM and Red Hat. How do you go up against them?
Knaup: We have unique skills around data workloads. The original idea for Apache Mesos in 2009 was to build an abstraction layer on top of compute instances where you can run multiple different data services together, and they share the resources. That's been in our DNA since 2009, four years before we even started [Mesosphere] in 2013. That's why a lot of our customers come to us for help: they build data-driven products that have real-time data at their core. We have 10 years of experience automating these data services on any infrastructure, whether it's a public cloud, an edge location, private data centers, even cruise ships for one of our customers.
Rancher has been out there with the multi-cluster management value proposition for a while now. How does Kommander compete with them?
Knaup: One thing that is unique about Kommander, we think, is how closely it ties into some of the workloads you can run on top. Workloads that are built with Kudo show up in the catalog in Kommander, and you can govern how different projects have access to these workloads. Take Kafka as an example: I want to use a certain version that I have vetted in production, so all of my production clusters in Kommander only have access to that stable Kafka version.
But at the same time, I have a lot of development teams that want the latest and greatest Kafka version with all the latest features. You can govern that. You can define which projects have access to which of these workloads. That works really well with workloads that are based on Kudo, but it supports anything you can put in that catalog.
What is the difference between Kudo and the CoreOS Operators that Red Hat offers?
Knaup: One of the main problems we wanted to solve with Kudo is that it's just way too hard to create an Operator, and way too hard to use one… We actually saw that same problem years ago with Mesos. The equivalent of an Operator on Mesos is called a Framework. After we had built a few, we realized that there were a lot of common things we could extract into a toolkit. On DC/OS, that's called DC/OS Commons, and we used it for years to build our data services instead of building a controller for each individual workload...
Kudo does the same thing and gives you a DSL that's YAML-based and feels familiar to people who have written runbooks or built automation for software before. They don't have to know the details of Kubernetes internals, like what is a controller? What is a control loop? Developers can just express their Operator in this configuration file, and then the Kudo universal controller sits underneath that.
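As a rough sketch of what that YAML-based DSL looks like (the structure follows KUDO's published operator package format, but the service itself is a made-up example), an Operator is declared as tasks and plans rather than written as Go controller code:

```yaml
# operator.yaml -- illustrative KUDO operator package for a hypothetical service
apiVersion: kudo.dev/v1beta1
name: "my-service"
operatorVersion: "0.1.0"
tasks:
  - name: deploy
    kind: Apply            # apply the listed Kubernetes manifests
    spec:
      resources:
        - deployment.yaml
plans:
  deploy:                  # the plan KUDO runs on install/upgrade
    strategy: serial
    phases:
      - name: main
        strategy: parallel
        steps:
          - name: everything
            tasks:
              - deploy
```

The universal controller interprets plans like this at runtime, which is how one controller can stand in for a hand-written controller per workload.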
The other side of it is also to make it easier for Operator users… Kudo provides a command line interface that allows you to do things like install, upgrade, backup and restore, and if you know how to install Kafka, you also know how to install Cassandra. That's all based on feedback we heard from people at KubeCon and from customers and prospects who said, hey, this Operator concept is really complicated, and I tried it out and failed.
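In practice that CLI ships as a kubectl plugin, so a typical session looks roughly like the following (the verbs reflect the KUDO CLI at the time of writing; the instance name is illustrative):

```shell
kubectl kudo init                                     # install the KUDO controller into the cluster
kubectl kudo install kafka                            # install the Kafka operator from the repository
kubectl kudo plan status --instance kafka-instance    # watch the deploy plan progress
kubectl kudo upgrade kafka --instance kafka-instance  # same verbs work for any operator
```

The point of the design is that the verbs stay identical across workloads, so knowing one operator's lifecycle means knowing them all.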
Does Kudo use the open source Operator Framework under the covers, or is it completely separate?
Knaup: It uses controller-runtime, which is a project out of Google.
Would you have to be using Konvoy or Kommander to use Kudo, or could it run on an OpenShift cluster?
Knaup: It runs on any Kubernetes. It installs on the cluster using a basic deployment of a pod with a controller, and you interface with it from kubectl. Or if you don't want that plugin, you can just use the CRDs directly as well.
Interview edited for brevity and clarity.