UPDATED 11:09 EDT / APRIL 28 2025

Three insights you may have missed from theCUBE’s coverage of Dell’s ‘Is Your IT Infrastructure Ready for the Age of AI?’ event

Artificial intelligence adoption is accelerating, but many enterprises still struggle to connect the dots between their infrastructure and AI ambitions. Even the smartest models fall short without a modern enterprise AI infrastructure and data foundation.

Arthur Lewis, president, infrastructure solutions group, at Dell, talks with theCUBE about enterprise AI infrastructure at the “Is Your IT Infrastructure Ready for the Age of AI?” event – 2025.

Dell’s Arthur Lewis talks with theCUBE about the company’s modern infrastructure solutions and handling on-premises data.

That’s where Arthur Lewis (pictured), president, infrastructure solutions group, at Dell Technologies Inc., sees the game changing. As data becomes the beating heart of enterprise AI, Dell is rethinking how infrastructure serves not just workloads, but algorithms, models and reasoning engines that evolve in real time.

“For years, customers have been on a digital transformation journey, and the underpinning of that has been the data,” he told theCUBE. “As we move into the world of AI, access and visibility to data become incredibly important. Silos of the past will be dismantled, infrastructure will be connected. Algorithmic innovation is going to drive smaller domain-specific models. So, you can envision modern data centers with a multitude of models, and data is going to be the fuel that drives those models.”

That message shaped the conversation at the “Is Your IT Infrastructure Ready for the Age of AI?” event, where theCUBE analysts Dave Vellante, John Furrier and Rob Strechay sat down with Dell executives and enterprise leaders to explore what AI-ready infrastructure looks like. From disaggregated systems to chip flexibility and end-to-end cyber resiliency, Dell laid out its blueprint for enterprise AI infrastructure success. (* Disclosure below.)

Here are three key insights you may have missed from theCUBE’s coverage:

1. Enterprise AI infrastructure breaks the old rules: Disaggregation is the new default.

The era of AI has thrown down the gauntlet for enterprise infrastructure. Legacy models can’t keep up with the scale, speed and specialization that modern AI workloads demand. At Dell, that challenge is fueling a pivot toward disaggregated architecture — an approach designed to optimize flexibility and performance across compute and storage, according to Travis Vigil, senior vice president of product management at Dell.

Arunkumar Narayanan, senior vice president, server and networking products, at Dell, and Travis Vigil, senior vice president of product management at Dell, talk with theCUBE about enterprise AI infrastructure at the “Is Your IT Infrastructure Ready for the Age of AI?” event – 2025.

Dell’s Travis Vigil and Arunkumar Narayanan talk with theCUBE about how enterprises are adopting disaggregated infrastructure to modernize data centers.

“What we’ve learned over the course of the last 10 years is that while hyperconverged was really great if people were focused on a singular ecosystem, you need to move to a disaggregated architecture,” he told theCUBE during the event.

This architectural shift isn’t just cosmetic, according to Vigil. It’s a practical response to the complexity of today’s AI environments, where organizations want to support multiple hypervisors, minimize waste and improve total cost of ownership. Underutilized cores in hyperconverged infrastructure deployments strand capacity in ways that don’t align with the economics of large-scale inferencing.

“What we’re seeing with the need to have choice and flexibility around which hypervisor you choose … has led us to a new conclusion, which is in order to get the most out of your infrastructure, you can’t have a solution where your cores on your [hyperconverged infrastructure] are 20, 30% utilized,” he said.
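
To put rough numbers on the utilization gap Vigil describes, here is a minimal back-of-the-envelope sketch in Python. The fleet size, core counts and utilization rates are illustrative assumptions, not Dell figures; the point is simply how much capacity sits stranded at 20% to 30% utilization.

```python
# Back-of-the-envelope capacity math; all figures are illustrative assumptions.
def effective_cores(nodes: int, cores_per_node: int, utilization: float) -> float:
    """Cores doing useful work across a fleet at a given average utilization."""
    return nodes * cores_per_node * utilization

NODES = 20            # assumed cluster size
CORES_PER_NODE = 64   # assumed cores per server
TOTAL = NODES * CORES_PER_NODE

hci = effective_cores(NODES, CORES_PER_NODE, 0.25)            # the 20-30% range in the quote
disaggregated = effective_cores(NODES, CORES_PER_NODE, 0.70)  # assumed target after decoupling

print(f"Effective cores at 25% utilization: {hci:.0f} of {TOTAL}")
print(f"Effective cores at 70% utilization: {disaggregated:.0f} of {TOTAL}")
print(f"Stranded capacity reclaimed: {disaggregated - hci:.0f} cores")
```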

That mindset is driving Dell’s evolving portfolio. By decoupling compute and storage, customers can refresh each component on its own cadence, matching product cycles and budget timelines without being locked into a one-size-fits-all model, according to Drew Schulke, vice president of product management at Dell.

“Where we see this disaggregated architecture taking hold is, ‘Let’s decouple the storage and compute, allow us to maximize each,’” Schulke told theCUBE. “I can refresh each of those then on the cycle that they need to refresh and when it makes sense.”

Here’s theCUBE’s complete video interview with Drew Schulke:

2. Dell’s silicon strategy blends chip diversity with real-world design agility to support evolving AI demands.

AI isn’t one-size-fits-all, and neither is the silicon that powers it. That’s why Dell’s approach to compute leans heavily on flexibility, designing enterprise AI infrastructure that adapts to different models, data center sizes and performance needs. Dell’s collaboration with Advanced Micro Devices Inc. around the Turin processor stack illustrates that approach, offering flexible configurations designed to optimize both performance and connectivity, according to Derek Dicker, corporate vice president of the Enterprise and HPC Business Group at AMD.

Derek Dicker, corporate vice president, enterprise and HPC business group, at AMD, and David Schmidt, senior director, PowerEdge product management, at Dell, talk with theCUBE about enterprise AI infrastructure at the “Is Your IT Infrastructure Ready for the Age of AI?” event – 2025.

AMD’s Derek Dicker and Dell’s David Schmidt talk with theCUBE about how a partnership between the two firms has helped advance compute capabilities for customers.

“What we’ve been able to do with input from Dell is architect a system that allows us to deliver those 128 lanes,” he said during the event. “You can shove 64 in the front, 64 out the back [and] have a balanced network connection but also have the ability to service your storage in the system.”

Dell’s PowerEdge servers, powered by AMD’s Turin chips, feature 12-channel DDR5 memory, designed to move data swiftly and efficiently. This kind of high-throughput memory architecture is becoming critical for AI-driven applications that demand heavy data movement, according to David Schmidt, senior director of PowerEdge product management at Dell.

“You’ve got memory to core ratios that have to keep scaling,” he said. “Workloads are scaling with the core counts, but it’s only because we’re able to design the systems that are providing the right amount of memory.”
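
As a rough illustration of the memory-to-core ratio Schmidt describes, the sketch below estimates per-core memory bandwidth for a 12-channel DDR5 socket. The DDR5 transfer rate and the core counts are assumptions for illustration; actual supported speeds and SKUs vary by configuration.

```python
# Rough memory-bandwidth-per-core estimate; transfer rate and core counts are assumptions.
CHANNELS = 12               # 12-channel DDR5 per socket, as described for the platform
TRANSFER_RATE_MTPS = 6000   # assumed DDR5-6000; actual speeds depend on configuration
BYTES_PER_TRANSFER = 8      # 64-bit data path per channel

per_channel_gb_s = TRANSFER_RATE_MTPS * BYTES_PER_TRANSFER / 1000  # ~48 GB/s per channel
socket_gb_s = per_channel_gb_s * CHANNELS                          # ~576 GB/s per socket

print(f"Aggregate per socket: ~{socket_gb_s:.0f} GB/s")
for cores in (64, 96, 128):  # assumed core counts to show how the ratio shifts
    print(f"{cores} cores: ~{socket_gb_s / cores:.1f} GB/s per core")
```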

Here’s theCUBE’s complete video interview with Derek Dicker and David Schmidt:

That same fit-for-purpose mindset applies to Dell’s ongoing partnership with Intel Corp. As generative AI adoption accelerates, organizations face pressure to simultaneously optimize power usage and performance, especially when deploying AI inferencing at scale. That’s where Intel’s Xeon 6 processor stands out: It’s a fresh blend of efficiency and compute muscle, according to Rakesh Mehrotra, vice president of strategy, product management and operations of data center and AI at Intel, and Jonathan Seckler, senior director of server networking, ISG and product marketing, at Dell.

“Xeon 6 is the best processor we’ve ever made,” Mehrotra told theCUBE. “It’s the world’s best [central processing unit] for AI workloads … We’ve seen improvements up to 40% across the broader set of workloads. That’s a huge number.”

For Dell, building around processors such as Xeon 6 is part of a larger strategy to create flexible enterprise AI infrastructure that can evolve alongside enterprise needs. That strategy includes tuning systems to balance power efficiency with workload requirements and contributing to shared industry standards, according to Seckler.

“I think that great AI requires great power,” he said. “By working on standards to improve efficiency and to leverage each other’s strengths, we are able to partner now with Intel with the introduction of the Xeon 6 processor.”

With Gaudi 3 accelerators in Dell’s PowerEdge servers, enterprises can run AI models directly through application programming interfaces, simplifying access to real-time inferencing. This approach gives developers more direct control over infrastructure performance and model deployment, especially in production environments that demand low-latency response times, according to Mehrotra.

“What we are doing with PowerEdge … is we are enabling AI inferencing workloads on Gaudi 3 by enabling API calls that you can directly do on this PowerEdge infrastructure,” he told theCUBE. “You have [application programming interface] into PowerEdge, and you can run the application by making a call through this API.”
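
To make “making a call through this API” concrete, here is a minimal client-side sketch of an inference request. It assumes an OpenAI-compatible chat endpoint served from the PowerEdge system; the URL, port, model name and payload schema are placeholders for illustration, not a documented Dell or Intel interface.

```python
# Minimal sketch of calling an inference endpoint hosted on the server.
# The endpoint URL, model name and schema are assumptions (OpenAI-compatible style),
# not a documented Dell PowerEdge or Gaudi 3 API.
import json
import urllib.request

ENDPOINT = "http://poweredge-inference.example.internal:8000/v1/chat/completions"  # placeholder

payload = {
    "model": "llama-3-8b-instruct",  # placeholder model name
    "messages": [{"role": "user", "content": "Summarize yesterday's support tickets."}],
    "max_tokens": 256,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request, timeout=30) as response:
    result = json.loads(response.read())

print(result["choices"][0]["message"]["content"])
```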

Here’s theCUBE’s complete video interview with Mehrotra and Seckler:

3. Cyber resiliency sets the foundation for trusted, scalable AI.

Cybersecurity has officially climbed the corporate ladder, becoming a strategic priority as enterprises lean harder into AI and large language models. The focus has shifted from defending against attacks to ensuring enterprise AI infrastructure and digital operations can withstand and recover from them. That’s the essence of cyber resiliency, and it’s become mission-critical as data grows more valuable and vulnerable, according to Rob Emsley, director of data protection marketing at Dell.

Rob Emsley, director of data protection marketing at Dell, talks with theCUBE about enterprise AI infrastructure at the “Is Your IT Infrastructure Ready for the Age of AI?” event – 2025.

Dell’s Rob Emsley talks with theCUBE about how the company is positioning cyber resilience as an achievable, structured process.

“It’s still a boardroom discussion,” Emsley told theCUBE during the event. “We all know that a lot of discussions about AI are also taking place in the boardroom as companies move forward with their digital transformation initiatives. But the requirement to be secure, and especially to secure the data that you’re inevitably going to be relying upon to train your large language models is so vitally important.”

Rather than treat resiliency as an abstract goal, Dell frames it as a practical framework. Companies must assume breach, minimize their attack surface and embed recovery into core enterprise AI infrastructure operations, Emsley added. That framework includes anomaly detection, trusted recovery protocols and security baked into products such as Dell’s PowerProtect Data Domain.

“A lot of this is all underpinned by zero-trust principles to reduce the opportunity for people to get into your backup infrastructure,” Emsley said. “Then, you take that across all of the Dell portfolio of infrastructure, and every team within Dell is building intrinsic security into the platform. One thing that also keeps data secure is the ‘immutability’ word. The immutability of backups becomes a critical element to reduce that attack surface.”
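
As a simplified illustration of the anomaly detection Emsley mentions, the sketch below flags a backup job whose data change rate spikes far above its historical baseline, a common signal of encryption-style tampering. The thresholds and sample values are invented for illustration and do not reflect how PowerProtect Data Domain implements detection.

```python
# Simplified change-rate anomaly check for backup telemetry.
# Thresholds and sample data are illustrative assumptions, not product behavior.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, threshold_sigmas: float = 3.0) -> bool:
    """Flag a change rate that sits far above the historical baseline."""
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + threshold_sigmas * spread

history = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]   # % of data changed in recent nightly backups
print(is_anomalous(history, latest=38.5))  # True: spike consistent with mass encryption
print(is_anomalous(history, latest=2.3))   # False: within normal variation
```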

That view aligns with the broader philosophy Lewis expressed during the event: AI cannot thrive without trust in the data, and that trust deeply intertwines with infrastructure. The ability to fine-tune and iterate from the data center to the model layer depends on secure, stable foundations. Secure infrastructure is essential as AI evolves from narrow automation to reasoning systems.

“Ninety percent of the world’s data sits on-prem, and the majority of the data hasn’t even been created,” Lewis told theCUBE. “The evolution of fine-tuning is the continued training that allows inferencing to be optimal. We’re moving into reasoning and thinking models now. The fine-tuning that you’re doing is going to be way more important than the initial training of the model.”

Supporting this AI trajectory, both logically and physically, is the next frontier. For Dell, that mission starts with modernizing on-premises data centers. Aging facilities often can’t accommodate the space and power demands of dense AI workloads. Instead of migrating everything to the cloud, companies are investing in localized upgrades that support high-density compute and secure enterprise AI infrastructure, according to Arunkumar Narayanan, senior vice president, server and networking products, at Dell.

“Every enterprise, if they need to start doing AI, they need to create space for AI,” he said during the event. “They need to create power for AI. Data centers are like 10 years old, 15 years old, nobody has modernized that.”

Here’s theCUBE’s complete video interview with Emsley:

To watch more of theCUBE’s coverage of the “Is Your IT Infrastructure Ready for the Age of AI?” event, here’s our complete event video playlist:

(* Disclosure: TheCUBE is a paid media partner for the “Is Your IT Infrastructure Ready for the Age of AI?” event. Neither Dell Technologies Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
