UPDATED 22:40 EDT / MARCH 18 2025


The key takeaways from Nvidia CEO Jensen Huang’s GTC keynote

Nvidia Corp. Chief Executive Jensen Huang took the stage today at the company's annual GTC keynote with his characteristic blend of technical mastery, visionary ambition and a touch of humor.

This year’s event underscored not just how fast the AI revolution is moving but how Nvidia continues to redefine computing itself. The keynote by Huang (pictured) was a master class in scaling AI, pushing the limits of hardware, and building the future of artificial intelligence-driven enterprises.

Key takeaways

  • Blackwell architecture represents the biggest leap in AI computing.
  • AI Factories will replace traditional data centers.
  • Nvidia Dynamo is the AI operating system for large-scale inference.
  • Enterprise AI adoption is accelerating, driving full-stack AI solutions and an AI-powered workforce.
  • Networking and power efficiency are key scaling challenges.
  • AI-driven robotics and digital twins will drive the next wave of automation.

Blackwell in full production: computation demand and reasoning

At the heart of the keynote was the Blackwell system, the latest in Nvidia’s graphics processing unit evolution. This isn’t just another generational upgrade; it represents the most extreme scale-up of AI computing ever attempted. With the Grace Blackwell NVLink72 rack, Nvidia has built an architecture that brings inference at scale to new heights. The numbers alone are staggering:

  • 1-exaflop computing in a single rack
  • 600,000 components per data center rack
  • 120-kilowatt fully liquid-cooled infrastructure

The shift from air-cooled to liquid-cooled computing is a necessary adaptation to manage power and efficiency demands. This is not incremental innovation; it’s a wholesale reinvention of AI computing infrastructure.

Blackwell NVL with Dynamo: 40X better performance and scale-out

Huang emphasized that AI inference at scale is extreme computing, with unprecedented demand for FLOPS, memory and processing power. Nvidia introduced Dynamo, an AI-optimized operating system that enables Blackwell NVL systems to achieve 40 times better performance. Dynamo represents a breakthrough: operating system software engineered to run on AI factory hardware. This should unleash the agentic wave of applications and new levels of intelligence.

Dynamo manages three key processes:

  • Pre-fill phase: Efficiently reading vast amounts of information.
  • Key-value storage: Optimizing memory access for inference.
  • Decode phase: Generating output tokens while reusing cached computation.

The takeaway? Dynamo and Blackwell together redefine AI performance, making large-scale inference more efficient and scalable than ever before.
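The split between a compute-heavy pre-fill pass and a cache-reusing decode loop can be sketched in plain Python. This is a toy illustration of the general inference pattern under our own assumptions, not Nvidia's Dynamo API; the function names and "attention" math are invented stand-ins.

```python
# Toy sketch of the two main inference phases Dynamo schedules:
# a pre-fill pass over the whole prompt, then a decode loop that
# reuses the key-value cache instead of reprocessing the prompt.

def prefill(prompt_tokens):
    """Pre-fill phase: process the entire prompt in one pass,
    producing one (key, value) cache entry per token."""
    return [(tok, tok * 2) for tok in prompt_tokens]  # toy KV pairs

def decode(kv_cache, steps):
    """Decode phase: generate tokens one at a time, reading the
    KV cache rather than recomputing the prompt each step."""
    out = []
    for _ in range(steps):
        # Toy "attention": the next token depends on cached values.
        next_tok = sum(v for _, v in kv_cache) % 100
        out.append(next_tok)
        kv_cache.append((next_tok, next_tok * 2))  # extend the cache
    return out

cache = prefill([3, 1, 4])       # one expensive pass over the prompt
tokens = decode(cache, steps=2)  # cheap per-token steps afterward
```

The point of the pattern: pre-fill is compute-bound and parallel, while decode is memory-bound and sequential, which is why managing the key-value cache well matters so much for large-scale inference.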

Upcoming AI infrastructure product roadmap: cloud, enterprise and robotics

Huang emphasized the importance of Nvidia laying out a predictable annual rhythm for AI infrastructure products and technology, covering cloud, enterprise computing and robotics.

The roadmap includes:

  • Now: Full-scale production of Blackwell GPUs.
  • 2H 2025: Blackwell Ultra NVL72.
  • 2H 2026: Vera Rubin NVL144 (named after the astronomer whose work provided key evidence for dark matter).
  • 2H 2027: Rubin Ultra NVL576 (600 kilowatts per rack!).

Each milestone is an exponential leap forward, resetting industry KPIs for AI efficiency, power consumption, and compute scale.

Scaling the AI network: Spectrum-X and silicon photonics

Networking is the next bottleneck, and Nvidia is tackling this head-on:

  • Spectrum-X: A “supercharged” Ethernet for AI factories.
  • Silicon photonics: 1.6 terabit per second bandwidth, enabling AI at massive scales.
  • Micro Mirror technology: A new Nvidia-developed transceiver that reduces power consumption for massive GPU networks.

As Huang pointed out, data centers are like stadiums, requiring short-range, high-bandwidth interconnects for intra-factory communication and long-range optical solutions for AI cloud scale.

Enterprise AI: Redefining the digital workforce

Huang predicted that AI will reshape the entire computing stack, from processors to applications. AI agents will become integral to every business process, and Nvidia is building the infrastructure to support them.

  • 10 billion digital AI agent workers are coming.
  • 100% of Nvidia’s operations will be AI-assisted by year-end.
  • AI-powered coding will replace traditional programming.

This isn’t just about replacing humans; it’s about enabling enterprises to scale intelligence like never before.

The shift from data centers to AI factories

Nvidia’s ultimate vision is to move from traditional data centers to AI factories — self-contained, ultra-high-performance computing environments designed to generate AI intelligence at scale. This transformation redefines cloud infrastructure and makes AI an industrial-scale production process.

Huang’s new punchline, “The more you buy, the more revenue you get,” was a comedic yet pointed reminder that AI’s value is directly tied to scale. Nvidia is positioning itself as the architect of this new era, where investing in AI computing power isn’t an option; it’s an economic necessity.

Storage must be completely reinvented to support AI-driven workloads, shifting toward semantic-based retrieval systems that enable smarter, more efficient data access. This transformation will define the future of enterprise storage, ensuring seamless integration with AI and next-generation computing architectures. Look for key ecosystem partners like Dell Technologies, Hewlett Packard Enterprise and others to step up with new products and solutions for the new AI infrastructure wave. Huang highlighted Michael Dell, showcasing Dell as having a complete Nvidia-enabled set of AI products and systems.

Beyond AI: reinventing robotics

Finally, Nvidia is applying its AI leadership to robotics. Huang outlined a future where general-purpose robots will be trained in virtual environments using synthetic data, reinforcement learning and digital twins before being deployed in the real world. This marks the beginning of AI-driven automation at an industrial scale.

Final takeaways

Huang’s GTC keynote wasn’t just about the next wave of GPUs; it was about redefining the entire computing industry. The shift from data centers to AI factories, from programming to AI agents, and from traditional networking to AI-optimized interconnects positions Nvidia at the forefront of the AI industrial revolution.

The Nvidia CEO has set the tone for the next decade: AI isn’t just an application — it’s the future of computing itself. As we have been saying on theCUBE Pod, AI infrastructure has to deliver the speeds and feeds and scale to open the floodgates for innovation in the agentic and new AI applications that sit on top.

Photo: John Furrier/SiliconANGLE
