At its 18th annual GTC conference last week, Nvidia Corp. unsurprisingly aimed to get audience members’ hearts pumping for what’s next: the rapid evolution of artificial intelligence.
Nvidia co-founder and Chief Executive Jensen Huang once again played his traditional keynote speaker role. As usual, he was dressed in all black, including a leather jacket. On Tuesday, Huang held court without a script for more than two hours, introducing Nvidia’s upcoming products.
It’s all about AI
AI adoption is occurring rapidly across many industries. Businesses are investing money and effort to show customers, partners and shareholders how innovative they are by leveraging AI. The success of this AI explosion depends on fast, reliable and innovative technology, so it makes sense that’s where Nvidia is focusing. Given the shockwaves created by January’s introduction of the DeepSeek-R1 LLM from the Chinese AI company of the same name, Huang eagerly shared all that Nvidia and its increasingly powerful, but expensive, graphics processing units, chip systems and AI-powered products can do.
Working in his customary rapid-fire presentation mode, Huang walked through a broad overview of industry trends and highlighted several recent and upcoming innovations from Nvidia. Here are some of the key announcements:
New chips for building and deploying AI models. The Blackwell Ultra family of chips is expected to ship later this year, and Vera Rubin, the company’s next-generation GPU architecture named for the astronomer whose observations provided key evidence for dark matter, is scheduled to ship next year. Huang said Nvidia’s follow-on chip architecture will be named after physicist Richard Feynman and is expected to ship in 2028. Nvidia is on a regular cadence of delivering the “next big thing” in GPUs, which is great for hyperscalers, but as the use of AI broadens to enterprises, it will be interesting to see whether they can keep up with Nvidia. I’ve talked to many chief information officers who aren’t sure when to pull the trigger on putting AI projects into production as models and infrastructure keep evolving at a pace never before seen in computing. Go now and start reaping the rewards, or wait six months and perhaps get exponentially more benefit? It’s a tough call, but my advice is to go now, as waiting just puts companies further behind. Still, as a former CIO, I understand the concern about moving now and risking obsolescence in a year.
Nvidia Dynamo, which Huang called “essentially the operating system of an AI factory,” is AI inference software for serving reasoning models at large scale. Dynamo is fully open-source, “insanely complicated” software built specifically for reasoning inference and for accelerating it across an entire data center. “The application is not enterprise IT; it’s agents. And the operating system is not something like VMware — it’s something like Dynamo. And this operating system is running on top of not a data center but on top of an AI factory,” Huang said. Dynamo is a great example of Nvidia’s “full stack” approach to AI. The company makes great GPUs, but so do other companies; what has set Nvidia apart is its focus on the rest of the stack, including software.
DGX Spark is touted as the world’s smallest AI supercomputer, and DGX Station, which Huang called “the computer of the age of AI,” will bring data-center-level performance to desktops for AI development. Both DGX computers will run on Blackwell chips. Reservations for DGX Spark systems opened on March 18, and DGX Station is expected to be available from Nvidia manufacturing partners such as ASUS, BOXX, Dell, HP, Lambda and Supermicro later this year. It’s important to note that DGX Spark isn’t designed for gamers but for AI practitioners. Typically, this audience would use a DGX Station as a desktop, which can run $100,000 or so. DGX Spark starts at $3,999, making it a great option for those doing heavy AI work.
On the robotics front, part of the physical AI wave that’s coming, Huang announced partnerships with Google DeepMind and Disney Research. The partners will work to “create a physics engine designed for very fine-grained rigid and soft robotic bodies, designed for being able to train tactile feedback and fine motor skills and actuator controls.” Huang said the engine must also be GPU-accelerated so virtual worlds can live in super real time and train these AI models incredibly fast. “And we need it to be integrated harmoniously into a framework that is used by these roboticists all over the world.” A Star Wars-like walking robot called Blue, which has two Nvidia computers inside, joined Huang onstage to provide a taste of what is to come. He also said the Nvidia Isaac GR00T N1 humanoid foundation model is now open source. Robots in the workplace, or “co-bots,” are coming and will perform many of the dangerous or repetitive tasks people do today. From a technology perspective, many of these robots will be connected over 5G, creating an excellent opportunity for mobile operators to leverage the AI wave. The societal impact will be interesting to watch, since much of the fear around AI is that the technology will be used to replace people. During his keynote, Huang predicted, “By the end of this decade, the world is going to be at least 50 million workers short,” which runs counter to traditional thinking. Since robots can do many of the dangerous and menial jobs people do today, will we really be 50 million workers short? Hard to tell, but robots will be ready to fill the gap if required.
Shifting gears to automotive, General Motors has partnered with Nvidia to build its future self-driving car fleet. “The time for autonomous vehicles has arrived, and we’re looking forward to building, with GM, AI in all three areas — AI for manufacturing so they can revolutionize the way they manufacture,” he said. “AI for enterprise, so they can revolutionize the way they design and simulate cars, and AI for in the car.” He also introduced Nvidia Halos, a chip-to-deployment autonomous vehicle safety system. He said he believes Nvidia is the first company in the world to have every line of code — 7 million lines of code — safety-assessed. He added that the company’s “chip, system, our system software and our algorithms are safety-assessed by third parties that crawl through every line of code” to ensure it’s designed for “diversity, transparency and explainability.” At CES, innovation around self-driving was everywhere. Roll back the clock about a decade and many industry watchers were predicting we would have fully autonomous vehicles by now, yet they remain few and far between. AI in cars has come a long way, and vehicles are safer and smarter, but the barrier to full autonomy proved higher than many expected. I believe we are right around the corner, though.
Quantum day interesting but left big questions unanswered. Thursday at GTC featured the first-ever quantum day, where Huang interacted with 18 executives from quantum computing companies across three panels. The event was certainly interesting, introducing the audience to companies such as D-Wave, IonQ and Alice & Bob, but it did not answer the two questions on everyone’s mind: What are the use cases for quantum, and when will it arrive? During the session, Huang did announce that Nvidia plans a quantum research facility, scheduled to open later in 2025. He also suggested that next year’s quantum day would feature more use cases. When I ask industry peers about quantum, I hear timelines anywhere from five years to 30. I believe it’s closer to five than 30: once we see some use cases, that will “prime the pump” and we should see a “rising tide,” much as we did with AI.
GTC 2025 is now in the rearview mirror, and while there was no “big bang” announcement, there was steady progress across the board toward a world where AI is as common as the internet. Think of this as the GTC that let companies digest how to use AI instead of trying to understand what the next big thing is. The breadth and depth of AI today shows it’s becoming democratized, which will lead to greater adoption — good for Nvidia, but also for the massive ecosystem of companies that now play in AI.
Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.
Photo: Robert Hof/SiliconANGLE