INFRA
Startup Nscale Global Holdings Ltd. will build four artificial intelligence data centers for Microsoft Corp. as part of a new contract announced today.
The facilities are expected to host about 200,000 graphics processing units. According to CNBC and the Financial Times, the contract is worth up to $24 billion.
Nscale is a London-based data center builder that spun out of Australian crypto mining company Arkon Energy Pty last year. It operates an AI facility in Norway that is powered by 30 megawatts of hydroelectric power. Over the past two months, Nscale has raised more than $1.7 billion from Nvidia Corp. and other investors to grow its data center footprint.
The company’s new collaboration with Microsoft will see it launch data centers in four countries. Nscale plans to start building the first facility in the Portuguese town of Sines early next year. At full capacity, the data center will host about 12,600 GPUs.
Nscale will start building a second, significantly larger facility in Texas during the third quarter of 2026. The campus will host about 104,000 GPUs, more than eight times as many as the Sines site.
The Texas data center will open its doors with an initial capacity of 240 megawatts. The company plans to increase that number to 1.2 gigawatts over time. Additionally, Microsoft will receive the option to add another 700 megawatts of AI capacity starting in the first quarter of 2027.
The remaining 75,000 GPUs that Nscale plans to operate for Microsoft will be deployed as part of two previously announced data center projects. According to the company, the first facility will be located in the U.K. Nscale plans to build the second site a few hours' drive from its existing data center in Norway.
All four facilities will use Nvidia's top-of-the-line Blackwell Ultra graphics processing unit. The chip can provide 15 petaflops of inference performance, a 50% increase over its predecessor. Nvidia is promising an even bigger speedup for attention layers, the software components that language models use to identify the most important details in user prompts.
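For readers unfamiliar with the mechanism, the attention layers mentioned above compare every token in a prompt against every other token and weigh the most relevant ones more heavily. The following is a minimal, illustrative NumPy sketch of single-head scaled dot-product attention; it shows the general technique only, not Nvidia's hardware implementation, and the toy dimensions are arbitrary assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Minimal single-head attention sketch.

    Each query token scores every key token for relevance; a softmax
    turns the scores into weights, which mix the value vectors.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ v                                  # relevance-weighted mix

# Toy example: 4 tokens with 8-dimensional embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8)
```

The all-pairs comparison makes attention quadratic in sequence length, which is why chip vendors target it specifically for acceleration.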
The Blackwell Ultra comprises 160 compute modules called streaming multiprocessors, or SMs. Each SM, in turn, includes 132 cores. Four of those cores are optimized to process low-precision data formats such as FP6 and FP8 numbers, while the others support a broader range of data types. Each SM also includes 256 kilobytes of integrated memory optimized to store AI model output.
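Low-precision formats such as FP8 trade accuracy for speed and memory: an FP8 value keeps only a few mantissa bits, so every number is rounded to a coarse grid. The sketch below simulates that rounding in software to show the effect; it is an illustrative approximation of an E4M3-style format (an assumption for demonstration), not how the hardware actually performs the conversion.

```python
import numpy as np

def fake_quantize_fp8_e4m3(x, mantissa_bits=3):
    """Round values to an FP8-like grid (illustrative only).

    With 3 mantissa bits, the relative rounding error is at most
    0.5 / 2**3 = 6.25%, which many AI inference workloads tolerate.
    """
    x = np.clip(x, -448.0, 448.0)                 # E4M3 max normal magnitude
    exp = np.floor(np.log2(np.abs(x) + 1e-38))    # power-of-two bucket per value
    scale = 2.0 ** (exp - mantissa_bits)
    return np.round(x / scale) * scale            # snap to the coarse grid

# Hypothetical weights rounded to the low-precision grid.
weights = np.random.default_rng(1).standard_normal(5).astype(np.float32)
quantized = fake_quantize_fp8_e4m3(weights)
print(np.abs(weights - quantized).max())  # small rounding error
```

Halving precision from FP16 to FP8 also halves memory traffic per value, which is a large part of why dedicated low-precision cores speed up inference.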
Nscale plans to deploy the Blackwell Ultra as part of GB300 NVL72 appliances. The Nvidia-developed systems each contain 72 Blackwell Ultra chips, 36 central processing units and networking equipment. They use liquid cooling to dissipate heat.
The GB300 NVL72 has also been adopted by CoreWeave Inc., a Nasdaq-traded Nscale competitor. The company operates a public cloud platform optimized to run AI workloads. Last month, CoreWeave inked two multibillion-dollar data center deals with Meta Platforms Inc. and OpenAI.
Nscale Chief Executive Josh Payne told the Financial Times today that the company plans to go public. According to the paper, the data center builder could list its shares in the second half of 2026.