UPDATED 20:11 EDT / MAY 22 2025

Ajay Mungara, senior director of developer products and ecosystem — data center and AI at Intel, and Chris Branch, AI strategy sales manager at Intel, talk to theCUBE about agentic workflows at Dell Technologies World 2025.

Why agentic workflows are solving AI’s scale problem

Agentic workflows are fast becoming the backbone of enterprise AI, enabling scalable automation that bridges on-prem systems and the cloud without adding complexity.

As AI adoption accelerates, companies are moving beyond experiments and into production-level deployment, and they’re doing it with tools that emphasize flexibility over rigidity. By using standard APIs, modular infrastructure and open-source components, agentic workflows allow teams to iterate quickly while maintaining control. This marks a critical shift in strategy: Businesses aren’t just chasing AI capabilities; they’re building architectures that make those capabilities sustainable, secure and adaptable to evolving environments, according to Ajay Mungara (pictured, left), senior director of developer products and ecosystem — data center and AI at Intel Corp.

Intel’s Ajay Mungara and Chris Branch talk to theCUBE about agentic workflows.

“What we are really focused on right now is there is a lot of hardware. There is a lot of AI talk,” Mungara said. “And everywhere you go, everybody’s an AI company. Every use case is an AI use case. Everybody’s talking about the digital assistance, agentic workflows, all of it. To make AI real, it is very simple as well as very complex at the same time … when you have to deploy that at scale, either on the cloud enterprise or a hybrid, it gets really complex.”

Mungara and Chris Branch (right), AI strategy sales manager at Intel, spoke with theCUBE’s Dave Vellante and Savannah Peterson at Dell Technologies World, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how agentic workflows, powered by open standards and modular infrastructure, are enabling enterprises to scale AI more efficiently across cloud and on-prem environments. (* Disclosure below.)

How agentic workflows unlock scalable AI

Organizations adopting agentic workflows are increasingly turning to standard APIs and open-source platforms to simplify the deployment of AI at scale. By abstracting the hardware and infrastructure complexities, these workflows allow for seamless integration across diverse environments, giving companies the flexibility to shift workloads without rewriting code, according to Branch.

“The reason why I’m excited around agentic and this API endpoint thing is because in the past everybody had to develop on the silicon itself, and it was complicated. It took forever,” he said. “With the agentic workflow combined with APIs, what you can do then is have a dashboard that runs multiple models simultaneously. What that agentic workflow with these APIs allows is for companies to run those on different systems at different times in different locations without changing any of their code.”
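The portability Branch describes rests on OpenAI-style API endpoints: because every deployment exposes the same request shape, only the endpoint URL changes between targets. A minimal sketch of the idea, using hypothetical endpoint URLs and model names for illustration:

```python
import json
import urllib.request

# Hypothetical deployment targets -- the same OpenAI-style chat-completions
# request can be aimed at any of them without touching application code.
ENDPOINTS = {
    "cloud": "https://api.example-cloud.com/v1/chat/completions",
    "on_prem": "http://gpu-rack.internal:8000/v1/chat/completions",
}

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat request; only base_url varies per deployment."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        base_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Identical application code, two different deployment targets.
cloud_req = build_chat_request(ENDPOINTS["cloud"], "llama-3-70b", "Summarize ticket 123")
local_req = build_chat_request(ENDPOINTS["on_prem"], "llama-3-70b", "Summarize ticket 123")
print(cloud_req.full_url)
print(local_req.full_url)
```

The request bodies are byte-for-byte identical; swapping a workload from cloud to on-prem is a configuration change, not a rewrite, which is the flexibility Branch is pointing to.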

This modularity also extends to inference use cases, such as chat interfaces, defect detection and IT automation. Each task might leverage a different AI model or compute resource, but with agentic workflows, they can all operate within a unified dashboard. Standards such as the Llama and OpenAI APIs are central to enabling this level of fluidity and collaboration between agents, according to Mungara.
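One way to picture that unified dashboard is a simple routing table: each inference task named above maps to its own model and compute target, while the calling code stays the same. A sketch, with all model names and endpoints invented for illustration:

```python
# Hypothetical task-to-model routing table -- each task from the article's
# examples (chat, defect detection, IT automation) gets its own model and
# endpoint, abstracted behind one lookup.
ROUTES = {
    "chat": {"model": "llama-3-8b", "endpoint": "http://cpu-node:8000/v1"},
    "defect_detection": {"model": "vision-detector", "endpoint": "http://gpu-node:8001/v1"},
    "it_automation": {"model": "llama-3-70b", "endpoint": "https://cloud.example.com/v1"},
}

def route(task: str) -> dict:
    """Return the model/endpoint pair for a task; unknown tasks raise KeyError."""
    if task not in ROUTES:
        raise KeyError(f"no agent registered for task {task!r}")
    return ROUTES[task]

# The dashboard code is identical for every task; only the table entry differs.
print(route("chat")["model"])
```

Because the dashboard only ever consults the table, retargeting defect detection from one accelerator to another means editing one entry, not redeploying the application.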

“Without that standard, then this agent can’t even discover the existence of another agent, it doesn’t even know what to do,” he explained. “If you’re having those standards, then it really enables innovation to blossom. Thousand flowers can bloom everywhere, and you have to just stitch it together to get your best enterprise outcomes. That time-to-value you will only get if you have those level of standards.”

At the foundation of this vision is the Open Platform for Enterprise AI, or OPEA, which provides infrastructure-agnostic building blocks for generative AI. Supported by contributions from Advanced Micro Devices Inc., Neo4j Inc., Infosys Ltd. and others, OPEA allows enterprises to rapidly test, validate and deploy scalable solutions across cloud and on-prem infrastructure, Branch explained.

“We want it to be transparent, easy to use and a lot more enjoyable for us to go solve real-world problems rather than worrying about how we’re going to implement that,” he said. “We want to solve the hard problem so we can get to the point where we’re saying we are impacting business outcomes.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Dell Technologies World:

(* Disclosure: Intel Corp. sponsored this segment of theCUBE. Neither Intel nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
