Q&A: VMware, Bitfusion tackle AI infrastructure and alternative processors
In July 2019, VMware Inc. announced its acquisition of Bitfusion.io Inc., a pioneer in virtualizing hardware accelerators, such as graphics processing units and field-programmable gate arrays, for artificial intelligence workloads. VMware’s ultimate goal with the acquisition: help businesses more efficiently use AI technologies on-premises and in hybrid cloud computing environments.
Mike Adams (pictured, left), senior director of cloud platform product marketing at VMware Inc., and Ziv Kalmanovich (pictured, right), product line manager at VMware, spoke with Dave Vellante (@dvellante) and John Furrier (@furrier), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during this week’s VMworld event in San Francisco. They discussed the recent acquisition of Bitfusion, the trend of alternative processors, and the impact of vSphere (see the full interview with transcript here). (* Disclosure below.)
[Editor’s note: The following has been condensed for clarity.]
Furrier: For the folks that don’t know much about the acquisition, what was the motivation, what was the company’s core product, what was the interest?
Adams: The company had a product called FlexDirect, and that particular product was really focused on taking a similar concept that a lot of VMware-ites know — which was, ‘Hey, in the compute space, we try to take these isolated islands and pull them together.’ Same type of thing here: You had these expensive devices that people were buying, and they were isolated — and now, if we could take a single server, it’s got a bunch of GPUs on it, why don’t we share it, right?
You see all these papers that come out around machine learning, and at the very end, it says, ‘Geez, I’m amazed that these GPUs are so underutilized. Even when we’re actually using them.’ It’s kind of like buying a car and then using the radio only, right? It just doesn’t make sense.
Vellante: You’ve got this trend of alternative processors just … exploding all over the place. Talk about some of the trends you see in that regard and how you’re taking advantage of them.
Adams: Ziv and I see a lot of different types of devices and acceleration devices — whether it’s compute, or network, or storage — and in this particular case, we just see a hotbed of all these customers that are seeing this same problem. And we’ve got great partnerships with Intel … Nvidia, and many others, and we just want to really leverage those for these devices. Because you look at vSphere and say, ‘OK, traditional workloads, we’ve done those very, very well.’ But as we get into containers, Kubernetes, machine learning, and AI, we want these newer cloud-native and newer workloads to come our way. And taking advantage of these new capabilities really helps accelerate that in a big way.
Furrier: Can you explain more on the vSphere impact? What should customers know?
Kalmanovich: I think that the first thing to clarify here is that often there is this question, ‘Why would I run ML or AI workloads specifically on vSphere as a platform?’ But then customers do run ML and AI workloads on public clouds, and those layers are not that different from vSphere — it’s a virtualization layer, and they are running it in virtual machines. So the whole idea of it, Bitfusion specifically, is that … we can make it even more efficient to run these workloads on top of vSphere. Because the underlying infrastructure that … you have to accelerate these workloads, today, they are mostly GPUs obviously — but in the future, as Mike also mentioned, new ASICs are coming in and FPGAs are coming in.
Furrier: If I’m operating vSphere … and I have developers kicking around the corner, and I have cloud; my whole hybrid environment, where does this fit in?
Kalmanovich: This fits into essentially any place where vSphere is running. It doesn’t matter if it would run on VMware Cloud, or any other of our cloud partnerships, or on the edge where vSphere runs — this is a core capability of vSphere. So it doesn’t matter where physically your infrastructure is; we would be able to expose this technology.
The idea is also that … there’s an architectural change that’s also coming in, and then … the servers are actually getting denser in the acceleration infrastructure that they have in them. So you’re seeing four to eight GPUs in a single server. Those are very powerful machines. You can’t just move all your workloads onto a single machine. Again, that brings us back to Bitfusion and this disaggregated model of accelerator use — which is very similar … to centralized storage use, for example.
Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of the VMworld event. (* Disclosure: VMware Inc. sponsored this segment of theCUBE. Neither VMware nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE