Anthropic partners with AWS and Palantir to provide AI models to defense agencies
Generative artificial intelligence startup Anthropic PBC said today it’s joining with big data analytics service company Palantir Technologies Inc. and Amazon Web Services Inc. to provide its Claude AI model family to U.S. intelligence and defense agencies.
The company said the partnership will use Palantir’s data products to support government operations by processing vast amounts of data rapidly, with an eye to producing data-driven insights and identifying patterns and trends quickly. It would also help review documents and prepare for operations in time-sensitive and critical situations.
The company’s Claude AI model became accessible through the Palantir Artificial Intelligence Platform via AWS earlier this month. Using Palantir’s AIP, customers can access Claude through Amazon SageMaker, AWS’s fully managed machine learning service, hosted within Palantir’s secure infrastructure. According to the companies, Palantir and Amazon are among the limited number of companies to receive the Defense Information Systems Agency’s Impact Level 6 accreditation.
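For developers, access through SageMaker generally means calling a hosted model endpoint rather than Anthropic’s public API. The sketch below illustrates that pattern with the AWS SDK for Python; the endpoint name, region and request schema are placeholders, since the Palantir-hosted configuration isn’t publicly documented.

```python
# Minimal sketch: invoking a hypothetical Claude endpoint hosted on Amazon SageMaker.
# The endpoint name, region and payload schema are illustrative assumptions, not the
# actual Palantir/AWS production configuration.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-gov-west-1")  # assumed region

payload = {
    "anthropic_version": "bedrock-2023-05-31",  # assumed Messages-API-style schema
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Summarize this procurement document."}],
}

response = runtime.invoke_endpoint(
    EndpointName="claude-3-5-sonnet-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())
print(result)
```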
Impact Level 6, or IL6, covers classified data and information systems within the U.S. Department of Defense. It’s reserved for systems containing data critical to national security, classified up to the “secret” level, one step below “top secret.”
“Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions,” said Palantir Chief Technology Officer Shyam Sankar.
Anthropic also stressed that the partnership will enable the responsible application of AI. The company recently launched an upgraded version of Claude 3.5 Sonnet, which runs at twice the speed of Claude 3 Opus, its largest model.
The company is known for creating AI models designed to produce less harmful results using a concept the company calls “Constitutional AI.” This is a training approach that imbues the model with a set of values it should follow. The aim of this constitutional system is to make AI outputs less toxic and less likely to cause harm by having the model critique and revise its own responses against those values, with another AI model providing oversight rather than relying solely on human feedback.
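As a rough illustration of the critique-and-revise idea (Anthropic applies it during training, not at inference time), here is a minimal sketch using Anthropic’s Python SDK; the principle text and model alias are placeholders, not Anthropic’s actual constitution.

```python
# Minimal sketch of a critique-and-revise loop in the spirit of Constitutional AI.
# The principle text and model alias are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PRINCIPLE = "Choose the response that is most helpful, honest and harmless."
MODEL = "claude-3-5-sonnet-latest"  # placeholder model alias


def ask(prompt: str) -> str:
    """Send a single user message and return the text of the reply."""
    message = client.messages.create(
        model=MODEL,
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text


def critique_and_revise(prompt: str) -> str:
    # 1. Draft an initial answer.
    draft = ask(prompt)

    # 2. Ask the model to critique its draft against the stated principle.
    critique = ask(
        f"Principle: {PRINCIPLE}\n\n"
        f"Critique this response to '{prompt}' against the principle:\n{draft}"
    )

    # 3. Ask for a revision that addresses the critique.
    return ask(
        f"Rewrite the response below so it addresses the critique.\n\n"
        f"Response:\n{draft}\n\nCritique:\n{critique}"
    )


if __name__ == "__main__":
    print(critique_and_revise("Explain how to handle sensitive documents."))
```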
This news comes as other AI firms have also begun to open their models to government entities. Meta Platforms Inc. recently announced that it would allow U.S. intelligence and defense contractors to use its open-source Llama AI models, and OpenAI is reportedly seeking deals with U.S. defense firms.
According to Palantir, the newest Claude models have already seen broad adoption across multiple industries and have had a significant impact.
“For example, one leading American insurer automated a significant portion of their underwriting process with 78 AI agents powered by AIP and Claude, transforming a process that once took two weeks into one that could be done in three hours,” said Sankar. “We are now providing this same asymmetric AI advantage to the U.S. government and its allies.”
Image: Pixabay