UPDATED 17:39 EST / NOVEMBER 01 2023

POLICY

28 countries sign Bletchley Declaration on AI safety

The U.S., the U.K., China and 25 other countries have signed a declaration stressing the need to address the potential risks posed by artificial intelligence.

The Bletchley Declaration, as the document is called, was announced during the high-profile Summit on AI Safety taking place today in London. The event is the first in a series of planned gatherings dedicated to AI risks and methods of addressing them. A second summit on the topic is set to be hosted by South Korea within six months, and a similar event will take place in France about a year from now.

The declaration is an approximately 1,300-word document that outlines some of the risks that advanced AI models could pose and potential ways to address them. The 28 countries that have backed the declaration and the European Union, which is also a signatory, point out that AI is not only widely used today but is likely to become even more pervasive in the future. “This is therefore a unique moment to act and affirm the need for the safe development of AI,” the signatories state.

The declaration goes on to highlight a number of AI risks seen as particularly urgent. One is machine learning models’ potential to be used for the generation of deceptive content. The signatories also highlight that “particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks.”

The countries that backed the declaration have stated they plan to work together on addressing those risks. As part of the initiative, they intend to broaden existing AI safety collaborations as well as increase the number of participating countries.

The declaration outlines several key priorities for the effort. The participating countries will prioritize “identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding.” They will also seek to establish “respective risk-based policies across our countries” to address AI-related issues.

The effort will also see the participants “resolve to support an internationally inclusive network of scientific research on frontier AI safety.” Against the backdrop of the London event where the declaration was detailed, the White House announced the launch of a new research institute that will develop technical resources for detecting and mitigating AI risks. The U.K. earlier detailed plans for a similar institute.

The declaration also addresses the role of private companies, particularly those developing frontier AI models, in ensuring the technology is used safely. “We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems,” the declaration states.

“This week’s Bletchley Declaration – alongside the G7’s Hiroshima Process and domestic moves like the White House’s Executive Order – represent critical steps forward toward securing AI on a truly global scale,” said Siân John, chief technology officer at the security consultancy NCC Group. “We are particularly heartened to see commitments from the Bletchley signatories to ensure that the AI Safety Summit is not just a one-off event, but that participants will convene again next year in South Korea and France, ensuring continued international leadership. In doing so, it will be important to set clear and measurable targets that leaders can measure progress against.”

Still, the declaration doesn’t address every AI-related concern. “The declaration doesn’t cover adversarial attacks on current models or adversarial attacks on systems which let AI have access to tools or plugins, which may introduce significant risk collectively even when the model itself isn’t capable of anything critically dangerous,” noted Joseph Thacker, a researcher at the software-as-a-service security firm AppOmni Inc.

Photo: UK Government/Flickr CC BY 2.0
