UPDATED 16:14 EDT / FEBRUARY 16 2024

AI

Anthropic to roll out new features for combating election misinformation

Anthropic PBC, one of OpenAI’s best-funded rivals in the large language model market, today previewed new capabilities designed to combat election misinformation. 

The company also detailed a number of other initiatives it’s pursuing to prevent the misuse of its services in political contexts. Separately, Anthropic today joined more than a dozen other tech firms in signing an accord designed to tackle election-related deepfakes. Rival OpenAI is also among the signatories. 

San Francisco-based Anthropic offers a ChatGPT-like chatbot called Claude. It can process prompts containing up to 500 pages’ worth of text and perform actions in external systems such as databases. Anthropic says that the newest version of the model, which rolled out in late December, generates 30% fewer incorrect answers than its predecessors.

In the coming weeks, the company plans to update Claude with a “classifier and rules engine” designed to identify election-related questions. When the software receives such a prompt from U.S.-based users, it will redirect them to the voting information website TurboVote. The website is run by the nonpartisan organization Democracy Works.

Anthropic said that it will use data gleaned from the rollout to inform the release of similar features in other countries. Those features are expected to become available over the coming months.

“Our model is not trained frequently enough to provide real-time information about specific elections,” Anthropic detailed. “For this reason, we proactively guide users away from our systems when they ask questions on topics where hallucinations would be unacceptable, such as election-related queries.”

In the blog post detailing the planned updates, Anthropic also provided a glimpse into its other election-related initiatives. The company says it has been conducting red team exercises since last year to determine if its AI systems can be used to generate election misinformation. To support the effort, Anthropic engineers built a set of evaluations for assessing metrics such as how consistently its models refuse to answer harmful queries.

Separately, Anthropic and 19 other tech firms today signed a pledge designed to tackle political deepfakes. The document is known as the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. The signatories will take steps to combat AI-generated videos and other multimedia files that “fake or alter the appearance, voice or actions of political candidates, election officials and other key stakeholders.”

Alongside Anthropic, the list of signatories includes OpenAI and other venture-backed generative AI providers. Several of the tech industry’s largest players are participating as well. Amazon.com Inc., Microsoft Corp., Google LLC, Meta Platforms Inc. and IBM Corp. are among the signatories along with chip designer Arm Holdings plc. 

Image: Anthropic

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU