

Minutes before his keynote speech at GTC on Tuesday, Nvidia Corp. Chief Executive Jensen Huang surprised over 20,000 attendees in San Jose by suddenly appearing on-stage with a T-shirt launcher in his hands. He proceeded to fire rolled-up shirts into the crowd in the SAP Center, a fitting image for the speech that followed, which took dead aim at redefining the entire computing stack and enabling it for AI.
Nvidia’s vision, as outlined by Huang (pictured) and his executives this week, is of a world in which AI must be scaled up, with ever-more powerful components such as processors and networking, before it can be scaled out across more and more servers worldwide. That will require processing capability that pushes the limits of hardware even as enterprises build AI-driven operations on top of it. It demands money, time and planning, all part of the long game that Nvidia is playing as it seeks to maintain its leadership role in defining the next computer revolution.
“We’re building AI factories and AI infrastructure,” Huang said during his keynote. “It’s going to take years of planning. It isn’t like buying a laptop. There’s no replacement for scaling up before you scale out.”
A key ingredient in Nvidia’s “scale up” strategy is Blackwell Ultra NVL72, the latest iteration of Nvidia’s GPU platform. Huang unveiled a significant upgrade to its original Blackwell architecture, designed to handle the next generation of AI reasoning and agent-driven workloads.
Blackwell Ultra will include a staggering 600,000 components per data center rack and 120 kilowatts of fully liquid-cooled infrastructure. “We have a 1-exaflops computer in one rack,” Huang noted. “This is the most extreme scale-up the world has ever done.”
Huang also offered attendees a glimpse into the future by providing a roadmap for additional scale-up architecture. While Blackwell Ultra aggregates the power of 72 GPUs, Nvidia’s next-generation Rubin will offer 144 GPUs by this time next year, with an expansion to 576 GPUs and 600 kilowatts per rack in 2027.
Despite the performance boost for Blackwell, the network could still be a bottleneck. The Spectrum-X Ethernet and Quantum-X800 InfiniBand networking systems, released on Tuesday, can provide up to 800 gigabits per second of data throughput for each of the 72 Blackwell GPUs.
Nvidia also announced Dynamo, open-source inferencing software designed to increase throughput and decrease the cost of generating large language model tokens for AI. By orchestrating inference communication across what is expected to be thousands of GPUs, Nvidia intends to drive efficiency as AI agents and other use cases ramp up.
“Dynamo is essentially the operating system of an AI factory,” Huang said in his keynote.
Huang’s declaration of operational support highlights a key element in his firm’s enterprise strategy and a clear theme at GTC this week. The ultimate goal is for enterprises to move from traditional data centers to AI factories, high-performance computing environments that will generate AI at scale. This vision of AI as an industrial-scale production process is driving Nvidia’s evolving business model.
Nvidia itself has become an AI factory, according to Huang. In a briefing for the media the day after his keynote, Nvidia’s CEO outlined how his company has transitioned from a processor maker to a critical revenue driver for its diverse customer base.
Nvidia CEO Jensen Huang plays concertmaster with a T-shirt gun. (Photo: Mark Albertson/SiliconANGLE)
“We’re not building chips anymore, those were the good old days,” Huang said. “We are an AI factory now. A factory helps customers make money.”
In many ways, Nvidia now finds itself in an unusual position within the tech industry. It has no reluctance to communicate its product plan years in advance, and it appears wholly unconcerned about the potential for competition with the very customers it supplies.
The company’s release of its roadmap for Blackwell and Rubin, along with planned enhancements in several other key product areas, reflected a level of transparency that Huang pointedly noted in his appearance before the assembled press on Wednesday.
“We’re the first tech company in history that announced four generations of technology at one time,” Huang said. “That’s like a company announcing the next four smartphones. Now everybody else can plan.”
That’s an unusual but crucial gambit as Nvidia seeks to retain its big advantage in driving AI. Nvidia’s bold moves in divulging its roadmap reflect a market philosophy that its “big tent” approach will avoid potential conflicts of interest with those who purchase its systems. Huang noted that Nvidia deliberately avoids being a “solutions company,” and by leaving the last half of value creation to its customers, it can work side-by-side in partnership with any client to build AI platforms that deliver results.
“We have no trouble taking the core tech that we create and allowing them to integrate it in their core solution and take it to market,” Huang said. “We became the only AI company in the world that works with every AI company in the world. We have no trouble working with anyone in their way. We want to enable the ecosystem. That’s why every company is here.”
Indeed, a stroll through the GTC exhibit hall in San Jose this week provided an opportunity to interact with representatives from Amazon Web Services Inc., Microsoft Corp., Google Cloud, Oracle Corp. and Hewlett Packard Enterprise Co., among others. Attendees had an opportunity to rub shoulders with longtime tech luminaries Michael Dell of Dell Technologies Inc. and ServiceNow Inc. CEO Bill McDermott as they strolled the convention center halls.
When the Nvidia ecosystem convened in San Jose last year, SiliconANGLE industry analyst Dave Vellante described the event as “the most significant in terms of its reach, vision, ecosystem impact and broad-based recognition that the AI era will permanently change the world.” One year later, it would be hard to argue that AI’s impact has lessened or that GTC has become any less significant.
“Last year GTC was described as the Woodstock of AI,” Huang said during his opening-day keynote. “This year it’s being described as the Super Bowl of AI. We have now reached the tipping point of accelerated computing.”