UPDATED 13:30 EDT / MAY 11 2023

POLICY

European legislators vote to impose tighter controls on AI

European lawmakers today voted on amendments to the European Union’s “AI Act” draft legislation aimed at tightening regulations on generative artificial intelligence, including bans on biometric surveillance and predictive policing, as well as expanded definitions of “high-risk AI.”

Two European Parliament committees, the Internal Market Committee and the Civil Liberties Committee, jointly negotiated the amended text for the AI Act, which will modify the proposal put forward by the European Commission in April 2021, the legislators said in a press release.

Members of the European Parliament began proposing amendments to the draft legislation in mid-April after it became clear that AI systems could pose substantial harm both to people’s personal lives and to businesses. With the current amendments, the MEPs said they intend to “ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly.”

To address these concerns, the MEPs included obligations in the amendments for the providers of so-called foundation models for AI, the underlying technology for generative AI such as OpenAI LP’s GPT-4, to guarantee the protection of the fundamental rights, health and safety of users. Generative AI models are capable of producing content from vast datasets that can contain information about real people as well as copyrighted material. These models can also simulate human conversation and produce realistic images, which could put people at risk if misused.

To comply with the new rules, the producers of these foundation models would be required to build in guardrails that protect users and society against malicious use and to provide additional transparency, or face fines. The rules would also impose other requirements, such as disclosing that content was generated by an AI, designing the model to prevent it from creating illegal content and publishing summaries of copyrighted data used in its training.

The amendments also greatly expanded the categories of prohibited uses of AI, depending on the risk they might create for citizens. The rules the MEPs considered take a risk-based approach to the use of AI, imposing further obligations on providers and users depending on the level of risk to the safety of people and society, including for systems that exploit vulnerabilities, use manipulation or classify people based on personal characteristics.

For example, the amendments would specifically ban the use of AI in real-time remote biometric identification systems in publicly accessible spaces, as well as its use to scrape footage indiscriminately to create facial recognition databases. In other words, AI could not be used simply to scan people on the street and identify them. The only exception would be for law enforcement’s use of post-event identification, and only with judicial authorization.

The MEPs also called for a ban on predictive policing systems, meaning any system that flags individuals based on profiling, location or past criminal behavior, or that provides additional information used to target individuals.

Finally, the amendments would ban emotion recognition systems from use by law enforcement, border management agencies, workplaces and educational institutions.

The MEPs expanded the definition of “high-risk AI” to include anything that could harm people’s health, safety, fundamental rights or the environment. In particular, they targeted AI systems that might influence voters in political campaigns and recommender systems used by social media – especially those with more than 45 million users. Recommender systems operate by predicting user preferences in order to market to people on websites and are often used by retailers and advertisers to recommend products, but they’re also used by social media platforms to tailor news delivery and user timelines.

“We are on the verge of putting in place landmark legislation that must resist the challenge of time,” said Brando Benifei, an Italian politician and member of the European Parliament. “It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.”

The AI Act legislation in the EU follows increasing regulatory attention across the world as AI technology grows swiftly in both popularity and power. Regulators in Italy temporarily banned OpenAI from operating its ChatGPT AI chatbot in the country in April over privacy concerns, and both the U.S. and China have begun seeking public comment to craft future laws governing the technology.

Photo: Christian Lue/Unsplash
