UPDATED 20:00 EDT / MARCH 17 2026

AI

Nvidia CEO Jensen Huang reveals chip sales in China are about to restart

After surprising the tech world on Monday with a prediction that his company would see at least $1 trillion in chip orders through 2027, Nvidia Chief Executive Jensen Huang spent today discussing the global dynamics and market forces that could make this a reality.

Huang met with the media for nearly two hours today during the GTC gathering in San Jose, answering a wide range of questions about Nvidia’s announcements this week and the company’s vision for AI. He took several questions about China, an area of particular interest given the potentially lucrative nature of the country’s future business for the chip giant.

Previous U.S. restrictions on the sale of Nvidia’s H200 processor in China have limited the firm’s ability to sell to customers there. When asked about China in a similar session with the media during CES in January, Huang said that he was waiting to see the purchase orders. That situation has apparently changed.

“We have received purchase orders from many customers and we’re in the process of restarting our manufacturing,” Huang said on Tuesday. “Our supply chain is getting fired up.”

Supply chain risk

While the restart of its business in China was good news for the company, there remain questions about Taiwan and the prospect of China invading the territory and cutting off chip exports. Nvidia is expected to become the largest customer for the Taiwan Semiconductor Manufacturing Co. later this year, and any disruption in the supply chain for its AI chips could have a significant impact on the company’s growth.

“My only hope is that we can all work together, stay in peace, look at the big picture, and stay calm,” Huang said. “I am 100% certain that the world will depend on Taiwan for a very long time.”

A disruption in its supply chain from Asia could affect another key part of Nvidia’s future. This week’s announcements included the debut of the Groq 3 language processing unit, or LPU, technology that will play a central role in the firm’s AI inferencing strategy.

Made by the foundry division of Samsung Electronics in South Korea, Groq is an integral piece of Nvidia’s focus on AI inferencing for multi-agent workloads. The pairing of the Groq 3 LPU with the new Vera Rubin NVL72 rack is designed to maximize efficiency across power, memory and compute. It could also maximize the revenue picture for Nvidia, according to Huang, who described how Groq’s 25% role in its processing and storage solutions could add a similar percentage to its forecast for chip revenues.

“Theoretically, that $1 trillion could become $1.25 trillion,” Huang told the media. “The entire storage industry is going to follow us.”

Growth expected for robotics

Nvidia’s role as a central player in autonomous technology is also expected to ultimately contribute to its revenues. Nvidia’s latest releases included blueprints for AI training data generation to enable massive-scale processing for the AI models needed to drive the next generation of robots.

Although Nvidia’s automotive segment contributed only 1% to the firm’s total revenues in the past year, Huang expressed confidence that this picture will ultimately change.

“Most trillion-dollar businesses started at zero at one point,” Huang said, bringing up the example of its CUDA computing platform and programming model. “CUDA was zero percent of our business and 90% of our cost. It turned out that everything we did in the beginning cost us a lot of money and generated nothing.”

Nvidia’s announcements included an Open Physical AI Data Factory Blueprint to accelerate robotics, vision and autonomous vehicle deployment. The company’s bet on physical AI is grounded in a belief that customers will ultimately need the full spectrum of Nvidia’s compute architecture to achieve success.

“Nvidia’s autonomous vehicle business includes three computers,” Huang noted. “The total business is much larger than people think. Customers are buying one of these computers or all three computers from us.”

Future tech and AI ethics

Missing from this week’s GTC gathering has been much information about Feynman, Nvidia’s next-generation GPU microarchitecture that is planned for release in 2028. Although he briefly referenced Feynman during his keynote on Monday, Huang was asked to elaborate on the technology during his media briefing, and he had nothing further to add.

Huang was more forthcoming in response to questions about the ethics surrounding AI. The issue has been in the news recently over controversy surrounding Anthropic PBC’s refusal to allow U.S. government use of its technology for mass surveillance or to build autonomous weapons. OpenAI Group PBC subsequently signed a $200 million deal with the Pentagon after Anthropic was removed from a contract.

“AI shouldn’t break the law, AI should not promise functionality it does not have,” Huang said. “We need AI to do a lot of great things for us. We need AI agentic systems to be inside of security. I need superfast AI agents to go protect me.”

As SiliconANGLE’s analysts have noted, the AI industry is rapidly shifting from training models to inference, the process of running them to generate results, and competitive advantage will go to those that can control and optimize the data path. Nvidia has made the case this week that it intends to define that path, which is keeping Huang, the executive at the center of this key transformation, very busy indeed.

“My experience with Nvidia today is it’s making me busier today than I was six months ago,” said Huang, who waxed wistful about people a century ago having time to drink lemonade on their porch. “My philosophy is, ‘Don’t get fired, don’t get bored and don’t die.’ But each one of those three are very high-risk.”

Photo: Robert Hof/SiliconANGLE
