UPDATED 19:45 EDT / MAY 26 2025


Report: Nvidia racing to develop new, scaled-down Blackwell GPUs for China

Nvidia Corp. isn’t giving up on the Chinese market. Instead, it’s racing to develop a new, lower-powered artificial intelligence chip that will sell at a lower price than the now-restricted H20 model.

The new graphics processing unit will be part of Nvidia’s latest generation of Blackwell processors, but will be significantly less powerful than the variants destined for Western markets. According to Reuters, which first reported the news today, it will be priced at around $6,500 to $8,000, well below the $12,000 price tag of the H20 chipset, which is based on an older architecture.

In April, Nvidia was slapped with fresh restrictions that prohibited the export of the H20 GPU to China. That chip, based on the company’s Hopper architecture, is comparable to the H100 and H200 products that are sold to U.S. companies, but has less bandwidth and slower interconnection speeds. It was designed to meet earlier restrictions on chip exports to China, but Trump administration officials seemingly decided that even the scaled-down H20 was still too powerful to be exported to its biggest rival in the AI industry.

The decision came just weeks after it was revealed that DeepSeek Ltd., the startup that developed a foundation model with reasoning capabilities on a par with the best models from OpenAI and other U.S. firms, did so using clusters of H20 chips.

The lower price of Nvidia’s new chip is said to reflect its lower specifications and simpler manufacturing requirements, Reuters reported, citing three anonymous sources familiar with the company’s plans. It will be based on the Nvidia RTX Pro 6000D, which is a server-class Blackwell GPU. It will be equipped with conventional GDDR7 memory, as opposed to the high-bandwidth memory found in other Blackwell chips. It could enter production as early as June, the sources added.

The new chip would not be manufactured using Taiwan Semiconductor Manufacturing Co.’s most advanced Chip-on-Wafer-on-Substrate packaging technology, but instead use an older process.

A spokesperson for Nvidia declined to comment on the new chip, but said the company was still evaluating its “limited options” with regard to the Chinese market. “Until we settle on a new product design and receive approval from the U.S. government, we are effectively foreclosed from China’s $50 billion data center market,” the spokesperson said.

Despite the sanctions, China remains a key market for Nvidia, accounting for 13% of its annual revenue in its previous financial year. Nvidia has twice before been hit with restrictions prohibiting it from selling its most advanced chips to Chinese companies, and each time it has responded by creating a scaled-down version of its technology for that market.

Nvidia’s biggest competitor in the Chinese GPU market is Huawei Technologies Co. Ltd., which produces the Ascend 910B chipset. Nori Chiou, a semiconductor industry analyst at White Oak Capital Partners, told Reuters that Huawei is expected to match the performance of Nvidia’s scaled-down chips within the next one to two years. However, Nvidia retains one advantage over its rival: its chips’ tight integration with its CUDA platform for AI clusters.

CUDA is the programming architecture that’s used to optimize applications and AI models for Nvidia’s GPUs, and its widespread popularity means that developers are keen to stick with it.

The export ban implemented in April forced Nvidia to write off more than $5.5 billion in inventory, and Chief Executive Jensen Huang admitted last week that the company also walked away from more than $15 billion in sales.

According to Huang, Nvidia initially considered developing an even more scaled-down version of the H20 chip in response to the latest restrictions, but soon realized that the older Hopper architecture cannot accommodate further modifications.

The latest restrictions introduced new limits on GPU memory bandwidth, a metric that measures how fast data can move between the processor and its onboard memory system. Higher bandwidth is vital for AI workloads, as they involve processing extensive amounts of data.
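The bandwidth gap between conventional and high-bandwidth memory can be illustrated with back-of-the-envelope arithmetic: peak bandwidth is roughly the memory bus width times the per-pin data rate. A minimal sketch in Python, where the specific bus widths and per-pin rates are illustrative assumptions for the example, not the actual specifications of the chips discussed above:

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s.

    bandwidth = number of pins (bus width in bits)
              * per-pin data rate (Gbit/s)
              / 8 bits per byte
    """
    return bus_width_bits * data_rate_gbps_per_pin / 8


# Assumed example: a 384-bit GDDR7 bus at 28 Gbit/s per pin.
gddr7 = memory_bandwidth_gbs(384, 28)       # 1344.0 GB/s

# Assumed example: eight 1024-bit HBM stacks at 8 Gbit/s per pin.
hbm = memory_bandwidth_gbs(1024 * 8, 8)     # 8192.0 GB/s

print(f"GDDR7 (example): {gddr7:.0f} GB/s")
print(f"HBM (example):   {hbm:.0f} GB/s")
```

Under these illustrative figures, the wide HBM interface delivers several times the bandwidth of the GDDR7 configuration, which is why a bandwidth cap effectively rules out high-bandwidth memory.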

Photo: Nvidia
