UPDATED 21:54 EST / APRIL 01 2024

AI

US and UK governments agree deal to collaborate on AI safety testing

The U.S. and U.K. governments have agreed to cooperate on artificial intelligence safety, signing a formal agreement that will see them collaborate on methods for testing AI models and assessing the risks they pose.

The agreement, signed today in Washington, D.C., by U.K. Science Minister Michelle Donelan and U.S. Commerce Secretary Gina Raimondo, lays out how the partners will pool their technical expertise, information and talent on AI safety issues.

According to Reuters, the deal is the first bilateral agreement on AI safety in the world, and it comes as governments push to increase regulation of the AI industry amid fears that the technology could be misused. The risks include AI being used to stage damaging cyberattacks or design new bioweapons, and luminaries such as Elon Musk have warned that the technology might “kill us all.”

The deal will enable the U.K.’s new AI Safety Institute, which was established in November, and its U.S. equivalent, which has not yet been formed, to share expertise and know-how via secondments of researchers from both countries. The institutes are also expected to work together to evaluate private AI models built by companies such as OpenAI and Google LLC.

Donelan told the Financial Times that the partnership is modeled on an existing one between the U.K.’s Government Communications Headquarters, or GCHQ, and the U.S. National Security Agency, which often collaborate on matters relating to intelligence and national security.

“The next year is when we’ve really got to act quickly because the next generation of [AI] models are coming out, which could be complete game-changers, and we don’t know the full capabilities that they will offer yet,” Donelan told the Financial Times. “The fact that the United States, a great AI powerhouse, is signing this agreement with us, the United Kingdom, speaks volumes for how we are leading the way on AI safety.”

Donelan said it’s important for the U.K. to partner with the American government because many of the most important AI companies are based in the U.S., making American expertise key to understanding the risks the technology poses. The U.S. government also holds greater sway over those companies, so it’s better placed to ensure they adhere to the existing commitments they have made on AI safety.

Under Prime Minister Rishi Sunak, the U.K. has tried to take a leading role in marshaling nations to come up with an effective international response to the growing power of AI models, which are sometimes referred to as “frontier AI.” It convened the world’s first-ever AI Safety Summit and has committed £100 million ($125 million) toward its new AI Safety Institute – an amount that dwarfs the somewhat meager $10 million commitment by the U.S. to its own AI Safety Institute.

Strangely, though, Donelan told the Financial Times that the U.K. government does not have any plans to regulate the development of AI more broadly, at least not in the near term.

The U.K.’s stance differs from many other nations and regions. The European Union last year rolled out its AI Act, which is considered to be the toughest regulatory regime regarding AI development in the world. U.S. President Joe Biden has issued an executive order targeting AI models that have the potential to threaten national security, ahead of further legislation that’s still in the pipeline. China has also issued guidelines on AI that aim to ensure that the technology does not sidestep its longstanding censorship policies.

The collaboration between the U.S. and U.K. is a welcome initiative, as safety around AI has generally taken a backseat until recently, said Andy Thurai, vice president and principal analyst at Constellation Research Inc. The EU’s AI Act was really the first major initiative to try to rein in AI developers, he said, defining four categories of AI models based on the level of risk they pose and outright banning certain uses, such as facial recognition and biometrics, in specific situations.

According to Thurai, AI poses a global risk, so it would be better still for the U.S., U.K. and EU to come together and collaborate on guidelines. Today’s initiative, he said, is a good start.

“But the truth is that not enough is being done. Initiatives such as C2PA, which aims to establish authenticity and provenance for AI-generated content, are worthwhile and I would love to see more,” Thurai said. “Unfortunately, few people are willing to do the right thing and fund such projects to create truly responsible AI. Instead, we’re in an AI arms race and everyone involved is racing to try and build bigger, better and faster models that outperform the competition. So it will be a while before we see AI safety gaining any real momentum.”

Still, the safety tests developed by the U.S. and the U.K. are likely to help legislators and technology executives mitigate the risks posed by fast-evolving AI systems, and some of the leading players in AI appear willing to cooperate. Earlier this year, Google, OpenAI and Anthropic PBC all published plans outlining how the safety tests being designed will inform their future product development strategies.

Image: Freepik
