UPDATED 19:36 EDT / DECEMBER 10 2023

POLICY

EU reaches provisional agreement on landmark legislation to regulate AI

The European Union on Friday reached a provisional agreement on new rules that govern the development and use of artificial intelligence technologies such as ChatGPT.

The new EU AI Act, believed to be the world’s first major regulation governing AI, was reportedly hashed out after days of negotiations among officials from EU member states. Among the sticking points were how to regulate generative AI models and the use of biometric identification technologies such as fingerprint scanning and facial recognition systems.

Lawmakers from France, Germany and Italy opposed directly regulating generative AI models, also known as large language models, arguing instead that the companies developing them should self-regulate through government-introduced codes of conduct. They were reportedly concerned that excessive regulation might stifle innovation in Europe, which is desperately trying to compete with American and Chinese companies in the AI race. France and Germany are both home to some of the most promising generative AI startups, including Mistral AI and DeepL GmbH.

Reuters reported that the EU’s AI Act is the first regulation of its kind that’s specifically focused on AI technology. The law has been some years in the making, dating back to 2021, when the European Commission first proposed creating a common legal framework for AI. According to the New York Times, the act divides AI systems into categories based on their perceived level of risk, ranging from “unacceptable,” meaning they should be banned, down through “high,” “medium” and “low” risk.

EU Commissioner Thierry Breton posted on X, formerly known as Twitter, that the deal was a “historic agreement.” He added that it means “the EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook – it’s a launchpad for EU startups and researchers to lead the global AI race.”

Regulation of AI has become a prominent topic since the emergence of OpenAI’s ChatGPT late last year. The chatbot’s impressive capabilities, which enable it to engage in humanlike conversations, write original software code and perform other tasks, sparked a race among technology firms to create similar AI models. Many believe generative AI will have a significant impact on areas such as internet search, email writing and image generation, while drastically improving the productivity of business workers.

The rapid rise of ChatGPT and other generative AI models, such as Stable Diffusion, Google LLC’s Bard and Anthropic PBC’s Claude, has blindsided legislators, who are concerned about their potential to displace jobs, infringe on privacy and copyright, and spread misinformation and hate speech.

According to Breton, the EU’s AI Act will require AI companies to disclose information about how their models work, and evaluate them for “systemic risk.”

In a press release, the EU said there will be “obligations” for creators of “high-impact” general-purpose AI systems that meet certain benchmarks, including requirements to undergo risk assessments and adversarial testing, report serious incidents and more. The act also mandates a certain level of transparency, requiring that AI system creators provide technical documentation that includes detailed summaries of the data used to train their models. This is something that U.S. firms such as Google and OpenAI have steadfastly refused to do.

In addition, the act states that EU citizens should have a way to file complaints about AI systems and receive an explanation on how “high-risk” systems might affect their rights.

The release didn’t provide much detail on what those benchmarks are, nor did it reveal much about how the rules will be enforced. However, it did provide a framework for fines if companies are found to have broken the rules. These will vary based on the size of the company in question and the nature of its violation, ranging from 7.5 million euros or 1.5% of global revenue for lesser offenses up to 35 million euros ($37.6 million) or 7% of global revenue for the most serious ones.

Certain applications and activities will also be banned. For instance, it will be illegal to scrape facial images from CCTV footage or to categorize individuals based on sensitive characteristics such as their race, sexual orientation or political beliefs.

In addition, emotion recognition systems will be banned in workplaces and schools, and the creation of “social scoring systems,” similar to China’s social credit system, will be prohibited. AI systems that could “manipulate human behavior to circumvent their free will” and “exploit the vulnerabilities of people” will also be banned. Analysts believe these loosely defined rules will likely enable lawmakers to clamp down on people using AI systems to try to manipulate government elections.

There are some exemptions from the rules. For instance, law enforcement agencies will still be allowed to use AI-powered biometric technologies to search for evidence in recordings or in real time.

The implementation of regulations specifically designed to govern AI is an important step, and the EU is determined to become the first continent to establish rules on its development and use, said Holger Mueller of Constellation Research Inc. He believes the rules will help to simplify things for technology companies working with AI, especially startups that lack the funds to build large compliance teams. “The EU appears to be on track to regulate data usage too, specifically with regards to personally identifiable information and biometric data, which will also make things easier,” the analyst said. “But it’s too early to tell if the EU has struck the right balance to ensure the safe adoption of AI without stifling innovation.”

Although lawmakers have agreed on the deal in principle, a number of details still need to be finalized. Even when that happens, the act is unlikely to go into force before 2025.

Forrester Research Inc. analyst Enza Iannopollo told Reuters that the EU’s AI Act is “good news” for both businesses and society, though she believes it will inevitably attract some criticism. “For businesses, it starts providing companies with a solid framework for the assessment and mitigation of risks, that – if unchecked – could hurt customers and curtail businesses’ ability to benefit from their investments in the technology,” the analyst said. “And for society, it helps protect people from potential, detrimental outcomes.”

Image: Rawpixel/Freepik
