EU releases sweeping draft legislation for regulating AI
The European Commission, the European Union’s executive branch, today proposed new legislation that would place stringent restrictions on how artificial intelligence can be used and subject companies that violate the regulation to potentially hefty fines.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” stated European Commission Executive Vice President Margrethe Vestager. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”
Exactly what restrictions apply to what AI technology and under what conditions will depend on the use case. The proposed regulation would outright ban “AI systems considered a clear threat to the safety, livelihoods and rights of people,” the European Commission stated. Vestager said that one example of AI systems the legislation would prohibit is software designed to “use subliminal techniques to cause physical or psychological harm to someone.”
Furthermore, the legislation calls for the use of biometric identification systems in public places to be prohibited in principle. But it would carve out “very narrow exceptions that are strictly defined, limited and regulated,” such as for certain law enforcement purposes.
The legislation proposes a second set of rules to regulate AI systems that are allowed but considered high-risk. This category includes, among other things, machine learning software used in the context of critical infrastructure and algorithms responsible for processing loan applications. Under the proposal, the European Union could fine suppliers of high-risk AI products up to 6% of their total worldwide annual revenues if they fail to comply with the rules.
Among the requirements such companies would be expected to meet is the obligation to provide regulators with thorough documentation about how their software works. Moreover, they would have to demonstrate that their AI systems were developed “with a proper level of human insight” and using high-quality training data. Such firms would additionally be required to “respect the highest standards of cybersecurity and accuracy,” Vestager said.
A third category of AI systems the draft legislation aims to regulate is “limited risk” services such as customer service bots. Under the proposed rules, developers would be required to inform users that they’re interacting with a machine.
Most types of AI systems in use today, such as automatic spam filters, won’t be affected by the regulation.
“We can only reap the full benefits of AI’s societal and economic potential if we trust we can mitigate the associated risks,” Vestager stated. “To do so, our proposed legal framework doesn’t look at AI technology itself. Instead, it looks at how AI is used, and what for.”
For the proposed legislation to become law, it will have to be approved by the European Council and European Parliament. If the General Data Protection Regulation the bloc implemented a few years ago is any indication, the draft legislation may go through several changes before receiving final approval.