Amazon.com Inc. plans to invest up to $25 billion in Anthropic PBC and provide the company with a significant amount of additional computing capacity.
The collaboration, which was announced today, builds on an existing partnership that stretches back three years. Amazon previously built an AI cluster called Project Rainier to host Anthropic’s internal workloads. It has also invested $8 billion in the artificial intelligence developer.
The newly expanded partnership will see Amazon provide the company with an additional $5 billion in funding. The cloud and retail giant plans to invest up to $20 billion more at an unspecified later date. Anthropic, for its part, has committed to spending over $100 billion on Amazon Web Services over the next decade. That component of the partnership will give the AI developer access to up to 5 gigawatts of computing capacity.
Anthropic disclosed today that it already uses more than 1 million of the cloud giant’s custom AWS Trainium2 chips. Many of those processors are deployed in the Project Rainier cluster that AWS has built for the company. The cluster, which ranks as one of the largest of its kind in the world, came online last year and is spread across multiple data centers.
Anthropic will gain access to an unspecified number of additional Trainium2 chips by the end of June. In the second half of the year, AWS will add even more computing capacity powered by Trainium2 and its newer, more performant Trainium3 chip. Anthropic says the upgrades will provide it with nearly 1 gigawatt of computing capacity.
Trainium3 made its debut in December. It includes eight cores that together provide double the performance of a Trainium2 chip. Developers can optionally combine multiple cores into one large logical core, called an LNC, that can speed up some workloads.
AWS runs Trainium3 in internally designed servers that can provide up to 362 petaflops of capacity when processing MXFP8 data. The 144 accelerators installed in each system exchange data via a switch called the NeuronSwitch-v1. It's based on a custom design that provides double the bandwidth of AWS' previous-generation network device.
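For context, dividing the stated server-level figure by the stated accelerator count gives a rough per-chip throughput. This back-of-the-envelope Python sketch uses only the numbers above; the per-chip result is our inference, not an AWS-published specification:

```python
# Back-of-the-envelope: per-accelerator MXFP8 throughput in a Trainium3 server.
# Both inputs come from the article; the per-chip value is inferred, not official.
server_petaflops_mxfp8 = 362   # stated server-level MXFP8 capacity
accelerators_per_server = 144  # stated accelerator count per system

per_chip_petaflops = server_petaflops_mxfp8 / accelerators_per_server
print(f"~{per_chip_petaflops:.2f} petaflops per Trainium3 accelerator")
```

That works out to roughly 2.5 petaflops of MXFP8 capacity per accelerator.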
The newly announced partnership also encompasses the Amazon unit’s upcoming Trainium4 chip. According to AWS, servers based on the processor will provide more than 2 exaflops of performance when processing FP4 data. Anthropic will have the option to use the even more advanced AI chips that are expected to succeed Trainium4 once they become available.
Anthropic will use the AI accelerators together with AWS Graviton central processing units. The newest chips in the series feature 96 cores that each include a 2-megabyte L2 cache. Anthropic plans to rent CPU capacity equal to tens of millions of Graviton cores.
In the go-to-market arena, the companies intend to bring the “full Anthropic-native Claude console” to AWS. The integration will enable customers to access the AI model series using the cloud provider’s billing, monitoring and account management features. That will avoid the need for duplicate administrative workflows.
“Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS,” said Anthropic co-founder and Chief Executive Dario Amodei.