UPDATED 11:40 EST / NOVEMBER 14, 2024

Intel's Arun Gupta talks about the company's new composable AI platform, OPEA, with theCUBE at KubeCon + CloudNativeCon 2024.

Intel embraces open-source ethos with AI microservices platform

Open-source artificial intelligence projects are gaining popularity, and Intel Corp. is capitalizing on the momentum with its composable AI platform.

Open Platform for Enterprise AI, or OPEA, was launched by Intel this year as a vendor-neutral project with the Linux Foundation. Its potential is immense, according to Arun Gupta (pictured), vice president and general manager of developer programs at Intel.

“When you’re building a gen AI application, you need a whole bunch of microservices,” he explained. “The first contribution of OPEA is all of those, about 30-plus microservices. These are all cloud-native, so they’re published as containers and you can run them anywhere. Now, the microservices by themselves are not very helpful. What’s really impactful for developers and the end customers is blueprints that you build by stitching those microservices together. We provide that diverse and wide set of integrations for you.”
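To make the composition idea concrete, here is a minimal sketch in Python of the pattern Gupta describes: each microservice runs as its own container behind an HTTP endpoint, and a blueprint is essentially the wiring that chains them together. The service names, ports and routes below are hypothetical illustrations, not OPEA's actual interfaces.

import requests

# Hypothetical endpoints for three containerized microservices; in an
# OPEA-style blueprint, each of these would be its own container.
PIPELINE = [
    "http://localhost:6000/v1/embed",     # embedding microservice (hypothetical)
    "http://localhost:7000/v1/retrieve",  # vector-retrieval microservice (hypothetical)
    "http://localhost:9000/v1/generate",  # LLM-serving microservice (hypothetical)
]

def run_blueprint(payload: dict) -> dict:
    """Pass a request through each microservice in order, so the
    output of one stage becomes the input of the next."""
    for endpoint in PIPELINE:
        response = requests.post(endpoint, json=payload, timeout=30)
        response.raise_for_status()
        payload = response.json()
    return payload

print(run_blueprint({"query": "What does OPEA provide?"}))

Because each stage is just a container with an HTTP contract, any one of them can be swapped for another vendor's implementation, which is what makes the platform composable.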

Gupta spoke with theCUBE Research’s Savannah Peterson and Rob Strechay at KubeCon + CloudNativeCon NA, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the value of composable AI and how Gupta hopes to see OPEA grow. (* Disclosure below.)

A composable AI platform for everyone

OPEA is guided by an 11-person Technical Steering Committee, with only two seats held by Intel. This reflects Intel’s commitment to open-source AI development. The company has partnered with collaborators such as semiconductor maker Advanced Micro Devices Inc., and Gupta hopes to see the partner ecosystem expand.

“We want that TSC to be defining the roadmap for OPEA,” he said. “It’s a very vendor-neutral method … when OPEA was launched, we had about 15 partners. Now we have about 45-plus partners. We have seen that grow dramatically.”

OPEA’s composable AI tools enable customers to build their own retrieval-augmented generation chatbots that can be deployed on any compute infrastructure. Next, Gupta aims to have the project available on all major hyperscalers.

“I want to have these OPEA solutions available in all five hyperscalers, in their marketplaces,” he said. “Because right now, the solutions are validated on the hyperscalers, but it’s a bit of a manual step. I want a single-click deployment. I want to be able to see a wide set of integrations, which are well-documented, great customer adoption, single click.”
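What single-click deployment would buy the end customer is a running gateway that a simple client can talk to. As a rough illustration, the chat loop below is loosely modeled on OPEA's public ChatQnA example; the gateway address, route and payload shape are assumptions to verify against whatever blueprint is actually deployed.

import requests

# Assumed gateway address and route, loosely modeled on OPEA's ChatQnA
# example; confirm both against the blueprint you actually deploy.
GATEWAY = "http://localhost:8888/v1/chatqna"

def chat() -> None:
    """A bare-bones chat loop against a deployed RAG pipeline."""
    while True:
        question = input("you> ").strip()
        if question in ("", "exit", "quit"):
            break
        response = requests.post(GATEWAY, json={"messages": question}, timeout=60)
        response.raise_for_status()
        print("bot>", response.text)  # print the raw response; format may vary

chat()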

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of KubeCon + CloudNativeCon NA:

(* Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon NA event. Neither Red Hat Inc., the headline sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
