Q&A: D2iQ focuses on managing Kubernetes day 2 operations
Kubernetes, arguably the most popular open-source container clustering and orchestration platform today, seems to be everywhere. Its adoption is soaring across public clouds, on-premises data centers and edge locations. Could Kubernetes as a service soon become the new buzzword?
With some help, setting up a Kubernetes cluster can be simple, but running its day-to-day operations is another story. D2iQ Inc., formerly known as Mesosphere, recently announced a new solution called Kubernetes Universal Declarative Operator, or KUDO, which promises to help Kubernetes operators go beyond installation and into day 2 operations. Along with its other offerings, D2iQ is looking to position itself in the Kubernetes management and cloud-native space.
“What’s happening is, people are running more and more different workloads on top of Kubernetes,” said Tobi Knaup (pictured), co-founder and chief technology officer of D2iQ. “It is definitely the de-facto standard for doing cloud-native, and people are putting it in a lot of different environments. They’re putting it in edge locations.”
Knaup spoke with Stu Miniman (@stu), host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host John Troyer (@jtroyer), chief reckoner at TechReckoning, during the KubeCon + CloudNativeCon event in San Diego, California. They discussed the D2iQ rebrand, KUDO, multi-cluster deployments, and the customer journey. (* Disclosure below.)
[Editor’s note: The following has been condensed for clarity.]
Miniman: For our audience, explain a little bit about the rebrand, the company’s focus on day 2 operations … and why is your team specifically well positioned for that environment?
Knaup: We did the rebrand because, obviously, our old company name, Mesosphere, has Mesos in it. And we’ve been doing a lot more than [Mesos] for many years, actually. We help customers run Apache Kafka and Spark and Cassandra. We’ve also been doing a lot with Kubernetes for some time now. So that was the reason for the rebrand: we wanted a name that doesn’t have a particular technology in it.
And so we were looking for what really expressed what we do, what we help our customers with. And we’ve always been focused on day 2 operations — so everything that happens after the initial install. And then the IQ really stands for a couple of things. First of all, we try to put a lot of automation into our products to make them smart and help our customers.
Troyer: So one of the projects that you are now working on with your customers, partners and the bigger ecosystem is a way of approaching operators. Can you talk a little about KUDO … how you’re bringing it to the table here, and what some people’s experiences with it have been?
Knaup: When we looked at the Kubernetes operator space, we saw some of the same challenges that we had faced years ago. Building a Kubernetes operator requires writing a lot of code. Not every company has Go programmers … to write an operator. And more importantly, once you write those 10,000 lines of code or more, you also have to maintain them.
So we wanted to simplify that. We wanted to create an alternative way of building operators that doesn’t require you to learn Go, doesn’t require you to write code. It works with just the orchestration language that KUDO offers. And then for KUDO users, the API is the same across these different operators.
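To make that declarative approach concrete, here is a minimal sketch of what consuming a KUDO-packaged operator can look like: a single Kubernetes custom resource rather than custom Go code. The field layout follows the Instance resource KUDO publishes under its kudo.dev API group; the Kafka operator version and the BROKER_COUNT parameter are illustrative placeholders, not details from the interview.

```yaml
# Illustrative sketch only: asking KUDO to deploy a packaged operator
# by declaring an Instance resource with parameters, instead of writing
# operator code. Version and parameter names here are hypothetical.
apiVersion: kudo.dev/v1beta1
kind: Instance
metadata:
  name: my-kafka
spec:
  operatorVersion:
    name: kafka-1.3.0        # reference to an installed OperatorVersion
  parameters:
    BROKER_COUNT: "3"        # example tunable exposed by the package
```

Because every KUDO operator is driven through the same Instance-style API, day 2 actions such as parameter changes or upgrades can be applied the same way no matter which data service sits underneath.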
Miniman: Looks like you’ve got some tooling here to help simplify that environment and make it easier because, of course, your application developers don’t want to worry about [the underlying infrastructure]. That’s the promise of things like serverless … so where specifically do you target and what are you hearing from customers as to how they’re sorting through these organizational changes?
Knaup: What a lot of folks are doing now is they’re running on various different types of infrastructure. They’re running on multiple public clouds. They’re running on the edge. So, how do you bring this API-driven deployment of these services to all these different types of locations? And so that’s what we try to achieve with KUDO for the data services and then with our other products too, like Kommander, which is a multi-cluster control plane.
It’s about, when organizations have all these different clusters, how do you then manage them? How do you apply configuration consistently across these clusters? How do you manage your secrets and RBAC rules and things like that? So those are all the day 2 things that we try to help customers with.
Troyer: What do you think is going to make for a successful partnership between you and a customer? What qualities do they need to have by the time they’re growing into production, and as they’re making choices here, what should end users be looking at?
Knaup: What we realized is that we need to partner with folks even at the very first steps, where they’re just getting educated about this space. What are containers? How are they different from VMs? What is this cluster management thing? How does it all fit together?
Besides all of the software that we’re building, we also offer training, for example. And so we just try to have a conversation with the customer — figure out what their needs are, whether that’s training, whether that’s services or different products. And the different products that come together in our Kubernetes product line, they’re really designed to meet the customer at these different stages.
There’s Konvoy; that’s our Kubernetes distribution for getting your first project up and running. Then once you get a little bit more sophisticated, you probably want to do CI/CD. We meet the customer where they are, and I think education is a big piece of that.
Miniman: Give us your viewpoint on the broader ecosystem: What needs to happen next to help further the journey that we’re all on?
Knaup: What’s happening is people are running more and more different workloads on top of Kubernetes. [It] is definitely the de-facto standard for doing cloud-native, and people are putting it in a lot of different environments. They’re putting it in edge locations.
So I think we need to figure out, how do you have a sane development workflow for these types of deployments? How do you define an application that might actually run on multiple different clusters? So I think there’s going to be a lot of talk … in a layer above Kubernetes, right? How can I just define my application in a way where I say maybe just run this thing in a highly available way on two different cloud providers?
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the KubeCon + CloudNativeCon event. (* Disclosure: D2iQ Inc. sponsored this segment of theCUBE. Neither D2iQ nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE