

Red Hat Inc. and Intel Corp.’s collaboration centers on translating open-source code into efficient AI solutions, including vLLM, the open-source inference-serving project.
vLLM is an open-source library that functions as an inference server, forming a layer between Red Hat’s models and Intel’s accelerators. Red Hat’s ongoing commitment to open-source solutions has fostered growth across the AI sector.
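For readers who want a sense of what that layer looks like in practice, here is a minimal sketch of offline inference with the vLLM library. The model name is illustrative; any Hugging Face-compatible checkpoint can be substituted, and vLLM schedules the work onto whatever accelerator it detects.

```python
# Minimal vLLM offline-inference sketch. The model name below is illustrative;
# substitute any Hugging Face-compatible checkpoint available to you.
from vllm import LLM, SamplingParams

# Load the model; vLLM places the weights on the detected accelerator.
llm = LLM(model="ibm-granite/granite-3.0-2b-instruct")

# Sampling settings for generation.
params = SamplingParams(temperature=0.7, max_tokens=128)

# Generate completions for a batch of prompts.
outputs = llm.generate(["What does an inference server do?"], params)
for output in outputs:
    print(output.outputs[0].text)
```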
Intel’s Chris Tobias and Red Hat’s Ryan King get ready to discuss their AI solutions.
“We start to work with the people that are the innovators and we say, ‘Hey, look, how can we bring a more open approach to this?’” said Ryan King (pictured, left), global head of AI and infrastructure ecosystem at Red Hat. “Once that becomes an open project and it becomes like a standard for everybody, we work on kind of our standard pattern, which is how do we make that enterprise ready.”
King and Chris Tobias (pictured, right), general manager of Americas technology leadership and platform ISV account team at Intel Corp., spoke with theCUBE’s host Rebecca Knight and theCUBE Research’s Paul Nashawaty at Red Hat Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed vLLM and Intel and Red Hat’s collaboration. (* Disclosure below.)
Companies are looking to implement AI in an efficient, cost-saving manner, which can mean running AI on-premises, choosing smaller models or using vLLM to speed up the output of generative AI applications. Intel has become a part of that equation with its new lineup of GPUs, according to Tobias.
“What we’re working with Red Hat to do is minimize that complexity, and what does the hardware architecture and what does all the infrastructure software look like, and make that kind of seamless,” he said. “You can just worry about, ‘Hey, what kind of application do I want to go with, and what kind of business problem do I wanna solve?’ And then, ideally, that gets you into a cost-effective solution.”
Intel and Red Hat have worked on a number of proof-of-concept projects together, and Intel’s hardware is fully compatible with Red Hat OpenShift AI and Red Hat Enterprise Linux AI. Their collaborations have so far seen success with customers hoping to adopt AI without breaking the bank, according to King.
“Our POC framework has different technical use cases, and now that vLLM becomes more central and on the stage for Red Hat, we’re seeing a lot of interest for vLLM-based POCs from our customers,” he said. “[It’s] really simple for a model to be able to make itself ready day zero for how it can best run on an accelerator.”
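In a deployment like the POCs King describes, vLLM commonly runs as a standalone, OpenAI-compatible server (started with `vllm serve <model>`), which applications then query over HTTP. A hedged sketch, assuming a server is already listening on localhost port 8000 and using the same illustrative model name as above:

```python
# Assumes a vLLM OpenAI-compatible server is already running locally, e.g.:
#   vllm serve ibm-granite/granite-3.0-2b-instruct --port 8000
# The endpoint and model name are illustrative.
from openai import OpenAI

# vLLM does not require a real API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="ibm-granite/granite-3.0-2b-instruct",
    messages=[{"role": "user", "content": "Explain vLLM in one sentence."}],
)
print(resp.choices[0].message.content)
```

Because the server speaks the standard OpenAI API, an existing application can be pointed at on-premises accelerators by changing only the base URL, which is part of what makes these POCs straightforward to stand up.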
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Red Hat Summit:
(* Disclosure: Intel Corp. sponsored this segment of theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)