UPDATED 11:50 EDT / MARCH 13 2024

AI

European lawmakers pass world’s first major regulation for AI

European Union lawmakers today gave final approval to the AI Act, the world’s first major regulatory framework governing artificial intelligence.

The European Parliament finalized the vote after member states reached an agreement on the landmark legislation in December. The regulatory framework, first proposed in 2021 by the European Commission, the EU’s executive branch, aims to govern how the technology is used by dividing it into risk categories ranging from “unacceptable,” which would see its use banned outright, down through high, medium and low.

Today’s vote approved the regulatory provisions with 523 votes in favor, 46 against and 49 abstentions.

Under the new law, certain applications of AI will be banned, particularly those that threaten citizens’ rights, including biometric scanning and categorization by police and private organizations. The law zeroes in on systems that infer sensitive characteristics, as well as the untargeted scraping of internet or CCTV footage to create facial recognition databases, partly because such systems have high false-positive rates. Emotion recognition in workplaces and schools, social scoring, predictive policing and any AI use that manipulates human behavior will also be banned.

Law enforcement may be exempted from the ban in extreme cases, such as serious crimes including kidnapping or terrorism, or when searching for a missing person. Such cases would require judicial oversight.

“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination and bring transparency,” said Brando Benifei, an Italian member of the European Parliament. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.”

Early drafts of the law emerged as OpenAI’s ChatGPT chatbot and other generative AI models were gaining popularity and their impressive human-like conversational capabilities began to wow audiences across the globe. At the same time, world governments were beginning to understand how generative AI models are trained, including their need to ingest vast amounts of data from private and public sources, which can include personal and copyrighted information.

The meteoric rise of ChatGPT, Google LLC’s Gemini (formerly Bard) and Anthropic PBC’s Claude has created fears about privacy and copyright across the AI industry. In 2023, Italian regulators temporarily banned OpenAI’s ChatGPT service over privacy concerns while opening a probe into its practices.

To address the need to understand what goes into training and deploying generative AI and general-purpose AI models, the AI Act also contains transparency provisions for compliance with copyright and privacy laws. Additionally, companies that build more powerful “high-risk” models – those involved in critical infrastructure, education, healthcare, banking, law enforcement or similar areas – will have additional responsibilities for reporting incidents and complying with model evaluations.

The AI Act also requires that any audio- or image-generating AI clearly label its outputs as synthetically manipulated content, to counter “deepfakes.” The rise of deepfakes has been an insidious problem: AI audio and image generation can produce content difficult to distinguish from reality, which can be used to commit fraud and sway public opinion. A deepfake robocall imitating President Biden was used to mislead voters ahead of a U.S. primary election.

“The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies,” said Dragos Tudorache, a Romanian member of the European Parliament. “However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labor markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice.”

The AI Act will become law by May or June, after it is reviewed and receives its final endorsement from EU member states. Its regulatory provisions will then take effect in stages: countries must ban prohibited systems six months after the law comes into force, the rules for chatbots and privacy take effect within a year, and obligations for high-risk systems follow one year after that.

Image: Rawpixel/Freepik
