UPDATED 17:10 EDT / SEPTEMBER 05 2024


US, UK, EU and others sign landmark AI safety treaty

The U.S., the UK, the European Union and several other parties have signed a treaty designed to ensure that artificial intelligence models are developed and used in a safe manner.

The development was announced today at an event in Vilnius, Lithuania. The treaty, formally known as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, is the fruit of a multiyear initiative that involved dozens of experts.

The document is the first international, legally binding treaty designed to ensure that AI systems are used in a manner consistent with human rights, democracy and the rule of law. At today’s event in Vilnius, the treaty was officially opened for signature. It has so far been signed by the U.S., the UK and the European Union as well as Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino.

The treaty outlines a set of principles that “activities within the lifecycle of AI systems” must uphold. They include human dignity and individual autonomy; equality and nondiscrimination; respect for privacy and personal data protection; transparency and oversight; accountability and responsibility; reliability; and safe innovation. The treaty also specifies a set of steps that signatories should take to ensure that AI projects adhere to those principles.

The treaty specifies that countries should perform assessments to map out how AI systems may impact human rights, democracy and the rule of law. If a signatory identifies potential risks, it’s expected to take steps to mitigate them. Additionally, signatories must provide a way for authorities to ban harmful applications of AI.

The treaty also lists a number of other approaches to ensuring that AI systems aren’t misused. In one section, the document states that signatories should provide a way for individuals to challenge decisions made using an AI system or “based substantially on it.” If a person needs more information about the AI system in question, or about the way it was used, in order to file a challenge, officials must share the relevant data.

Transparency is another focus of the treaty. In some situations, AI systems will be expected to display a notice informing users that they’re interacting with an algorithm and not a human.

“In order to stand the test of time, the Framework Convention does not regulate technology and is essentially technology-neutral,” the Council of Europe stated today.

Work on the treaty began in 2019. The effort included the participation of more than 50 countries as well as dozens of experts from civil society, academia and industry. The treaty is expected to go into effect three to four months after it’s ratified by at least five signatories. 

