On theCUBE Pod: A look at Nvidia’s latest announcements as the battle over generative AI rages on
This week, Nvidia Corp. made news on multiple fronts, announcing an upgraded version of its GH200 Grace Hopper chip, intended to enable companies to run more sophisticated language models, along with a slew of other new generative artificial intelligence products.
The company’s shares are up 180% this year, and Nvidia is now valued at over $1 trillion, making it the fifth-most valuable company in the United States. Nvidia’s graphics processing unit announcements and their potential implications for the world of AI become all the more important in a world full of “AI-washing,” noted theCUBE industry analyst John Furrier (pictured, left).
“They say, ‘Oh, we have AI,’ and they weren’t even considering machine learning and AI a few years ago,” Furrier said on the latest episode of theCUBE podcast. “But there are companies like Nvidia who saw the future; they invested heavily. And we know this for a fact. I remember back in 2013-15, in that timeframe, we were interviewing and talking to those folks.”
Nvidia has been relentless in its pursuit of AI hardware, in Furrier’s view, and he noted that theCUBE has long made the point that hardware still matters. Given that trend, the conversation about outcomes is shifting to the question of what one’s infrastructure looks like. It’s also important to ask what that stack looks like at the physical infrastructure layer, especially in the modern era of high-velocity data and AI.
“The conversations are changing from marketing speak to legit architecture that if you don’t have a strong architecture in your enterprise, you can’t successfully build out and transform your organizations,” Furrier said. “This is going to cause a lot of friction in the marketplace, because people are going to look at this and say, ‘I’m on the wrong side of this trend.’ And it’s not easy to get back on the right side.”
When talking about the need to move fast, what came to mind for theCUBE industry analyst Dave Vellante (right) was an internal memo from Google LLC in which a senior engineer warned that Google and ChatGPT developer OpenAI LP face increasing competition from open-source developers in the field of generative AI.
“When you unpack that thing, it’s like basically saying open source is where all the action is. Of course, we know that,” Vellante said.
The battle rages on
When considering the battle of the large language models — the proprietary LLMs versus the open-source ones — it’s important to remember that there are dozens of open-source LLMs that have been released, Vellante noted. That’s leading to exciting times in what he referred to as the “Wild West.”
“One of the areas that it’s obvious, whether we can talk to Dell or HPE or a number of the other folks at IBM, in part, is the discussion about where are large language models going to be run?” Vellante said. “Is it going to be run on-prem? Or are they going to be done in the cloud?”
On one hand, the cloud offers developer tools, along with new features and capabilities for fencing off LLMs and preventing LLM vendors from accessing customer data. At the same time, it also opens a can of worms of risk around legal, compliance, intellectual property leakage and copyright issues, according to Vellante.
“It’s really, really complicated. So point being, when you look at the data, a lot of this stuff, it’s right down the middle,” Vellante said. “When you look at the ETR data, it’s 50-50 going to be on-prem and in the cloud in terms of where people want to run these models.”
A few weeks ago at the Supercloud 3 event, various chief executive officers in the security world talked about these issues, and The Wall Street Journal had an article on Thursday focused on how AI was generating security risks faster than companies could keep up. The conversation is now catching up as these technologies are moving mainstream, according to Furrier.
“The mandate for generative AI is clear. And everyone’s talking about it,” Furrier said. “Andy Jassy, he said in his earnings report you’ll have AI in every single product. And there’s a lot of stuff going on in building out new stuff. The top-down mandate’s clear. Everyone sees it.”
Watch the full theCUBE Podcast below to find out why these industry pros were also mentioned:
Zeus Kerravala, founder and principal analyst at ZK Research
Jason Calacanis, internet entrepreneur, angel investor and author
Sam Bankman-Fried, former CEO of FTX
Jensen Huang, founder and CEO of Nvidia
Charles Fitzgerald, consultative strategist and investor
Andy Jassy, president and CEO of Amazon
Steve Jobs, co-founder and former CEO and chairman of Apple
Bill Gurley, general partner at Benchmark
Josh Wolfe, founder and managing director at Lux Capital
Joe Biden, 46th president of the United States
Janet Yellen, U.S. Secretary of the Treasury
Jay Chaudhry, founder, chairman and CEO of Zscaler
Hock Tan, president and CEO of Broadcom
Charlie Kawwas, president at Broadcom
Lina Khan, chair of the Federal Trade Commission
Napoleon Hill, writer
Mark Zuckerberg, CEO of Meta Platforms
Don’t miss out on the latest episodes of “theCUBE Pod.” Join us by subscribing to our RSS feed. You can also listen to us on Apple Podcasts or on Spotify. And for those who prefer to watch, check out our YouTube playlist.
Photo: SiliconANGLE