

It was a special episode of theCUBE Podcast for theCUBE Research industry analysts Dave Vellante and John Furrier, recorded live from MWC Barcelona. The show was all about networking and silicon and featured exclusive coverage on theCUBE, including conversations with Antonio Neri, Michael Dell, Charlie Kawwas and Rami Rahim, among others.
Silicon chips have been part of theCUBE's coverage for more than a decade. The importance of silicon has always been well understood, and this week saw several intriguing conversations take place.
“It was interesting to hear Charlie Kawwas talk about CMOS, and what a game changer it was,” said Vellante (pictured, left). “I remember when IBM mainframes were running ECL — emitter-coupled logic — and they transitioned to CMOS, which was much lower power. They had to pay a performance penalty, but they knew they were on a new curve. They dominated that business. Of course, we know what happened with the PC revolution and Intel.”
In his interview, Charlie Kawwas described a pivotal part of Broadcom’s philosophy: three core principles of openness, scale and low power. That leads to what has been an ongoing conversation around clustered systems, according to Furrier (right).
“The data center of the future is on-premise and edge. Smaller, faster, cheaper — Moore’s law,” he said. “It’s a cloud operation. Public cloud, on-premise and edge is distributed computing, operating as a cloud operation, meaning microservices, APIs and, now, AI, generative AI stack. OK, so that’s going to happen.”
Power is the new constraint now that there is no motherboard anymore, Furrier added. The question is how much power and cooling a data center can support.
“Then on the device, what’s the power consumption relative to generating that AI or application? The game is still the same; it’s just shifting to distributed,” he said. “We talked about it with Charlie. On the metaphor you used, I brought in highly available versus high availability. That’s a storage concept.”
Highly available and high availability sound the same, but they’re different. Highly available means one has data right there, ready to go, while high availability means one goes and gets it, according to Furrier.
“High availability means I can get data; I’ve got an application that needs data, and it’s available, meaning it’s on a server somewhere else — you get to go fetch it across the network,” he explained. “Highly available means it’s on the device itself, low latency.”
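Furrier’s distinction can be sketched in a few lines of Python. This is only an illustrative toy (the names, values and simulated delay are hypothetical, not from the episode): the same record is served once from local memory, as with data that is “highly available” on the device, and once behind a stand-in for a network round trip, as with “high availability” data that must be fetched.

```python
import time

# Hypothetical stores for illustration only.
LOCAL_CACHE = {"sensor_reading": 42}   # data already on the device
REMOTE_STORE = {"sensor_reading": 42}  # data on a server somewhere else

def read_highly_available(key):
    """'Highly available': the data is right there on the device, low latency."""
    return LOCAL_CACHE[key]  # no network hop

def read_high_availability(key):
    """'High availability': the data exists elsewhere and can be fetched."""
    time.sleep(0.01)  # stand-in for a network round trip
    return REMOTE_STORE[key]

if __name__ == "__main__":
    t0 = time.perf_counter()
    local = read_highly_available("sensor_reading")
    t_local = time.perf_counter() - t0

    t0 = time.perf_counter()
    remote = read_high_availability("sensor_reading")
    t_remote = time.perf_counter() - t0

    # Same data either way; the difference is where it lives and how long it takes.
    assert local == remote
    print(f"local read: {t_local*1e6:.0f}us, remote read: {t_remote*1e6:.0f}us")
```

Both calls return the same value; the only difference is latency, which is exactly the point for applications that need data instantly.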
In strategic warfare, any kind of drone activity operates at what is known as “the tactical edge,” according to Furrier. With consumers, it’s simply called the edge.
“If you need data instantly, it’s got to be on the device,” Furrier said.
The talk of the tech stock world continues to be Nvidia Corp., which surged last week. On last week’s episode of theCUBE podcast, Vellante suggested Nvidia had been building a moat for well over a decade.
“We debated it, the industry is debating it, and I think there’s a consensus: There are going to be all these alternatives that come out,” he said. “AMD, Intel, things like the LPU — very specialized processors. I have said, ‘Yeah, but none of them are going to have a monopoly like Nvidia.’”
But Vellante said he may have been asleep at the wheel when it comes to Broadcom Inc., which essentially has a monopoly-like business with very high gross margins of almost 70%.
“I wasn’t really thinking about it, and I missed it because I was thinking narrowly about Nvidia’s GPU competitors,” he said. “I think I would put Broadcom in the mix. They are the number two AI company by valuation. They’re an AI company because AI is everything now.”
Broadcom is perfectly situated for AI, even though Charlie Kawwas would say the company wasn’t trying to go after the AI curve and was instead just going after 10-year durable businesses, according to Vellante. The other point Vellante said he wanted to mention had to do with low latency edge and real-time data.
“In the moment, from a silicon standpoint, Tesla is the best example. This is a company that said, ‘We’re not going to use Mobileye,’ Intel’s chipset silicon. ‘We’re going to develop our own custom silicon using Arm,’” Vellante said. “Why did they do that? Because they wanted to get rid of LiDAR.”
Whether or not getting rid of LiDAR is the right thing or the safest thing is up for debate. But the company did save money.
“They saved roughly $2,000 a vehicle by using low-cost cameras and developing their own custom chipsets, with highly customizable NPU code — that they develop themselves — to interpret what was going on,” Vellante said.
Guests and notable names mentioned during the episode included:
Rami Rahim, CEO of Juniper Networks
Charlie Kawwas, president at Broadcom
David Floyer, analyst emeritus at theCUBE Research
Andy Jassy, president and CEO of Amazon
Jensen Huang, founder and CEO of Nvidia
Steve Ballmer, former CEO and president of Microsoft
Frank Slootman, chairman of the board of directors at Snowflake
Sridhar Ramaswamy, CEO of Snowflake
Michael Scarpelli, CFO of Snowflake
Jim Cramer, investment pro and media personality
John Donahoe, president and CEO at Nike, former CEO at ServiceNow
Joe Tucci, chairman and co-founder at Big Growth Partners
Don’t miss out on the latest episodes of “theCUBE Pod.” Join us by subscribing to our RSS feed. You can also listen to us on Apple Podcasts or on Spotify. And for those who prefer to watch, check out our YouTube playlist. Tune in now, and be part of the ongoing conversation.