
US lawmakers push for regulation of AI companies supplying government agencies

A bipartisan group of lawmakers today took the wraps off a new piece of legislation that would require federal agencies and the artificial intelligence service providers they use to adopt guidelines for managing the risks associated with the technology.

The proposed legislation, which is sponsored by Democrats Ted Lieu and Don Beyer alongside Republicans Zach Nunn and Marcus Molinaro, is fairly modest in scope, but that may increase its chances of being passed into law by the U.S. Congress. Last November, a Senate version of the bill was introduced by Republican Jerry Moran and Democrat Mark Warner.

If the bill is approved, federal agencies that make use of third-party AI services would be required to adopt guidelines announced last year by the U.S. Commerce Department, Reuters reported. The bill would also require the Commerce Department to develop more specific standards for companies that supply AI to the U.S. government. Moreover, it would call on the Federal Procurement Policy chief to create rules requiring AI suppliers to “provide appropriate access to data, models and parameters” so that federal agencies can test and evaluate their services.

The emergence of generative AI models, which can create original text, images, videos and code in response to human prompts, has caused tremendous excitement thanks to the hundreds of possible applications the technology has. However, it has also raised fears that some jobs could become obsolete and that the technology could be abused to manipulate elections, among other misuses. There are also fears that bad actors could use AI to gain access to critical infrastructure and computer systems.

Those fears have pushed U.S. lawmakers to move toward regulating AI technology, but few concrete steps have been taken so far. The most significant step came in October, when President Joe Biden signed an executive order that aims to regulate AI development by requiring developers to share information on the safety of their most advanced systems.

Europe has made more substantial progress in its efforts to regulate AI. In June, the European Union announced the AI Act, which bans certain AI systems such as predictive policing and biometric surveillance. The legislation also classifies some other AI systems as “high risk,” based on the perceived danger they pose to human health, safety, rights and elections. Such systems are now subject to specific guardrails designed to ensure their safe development and implementation.

Aside from the AI Act, the governments of France, Germany and Italy separately signed an agreement on AI regulation that supports “mandatory self-regulation through codes of conduct” in the development of foundation AI models.

In November, the U.K.’s National Cyber Security Centre, along with cybersecurity agencies from several other governments and a number of AI companies, released the Guidelines for Secure AI System Development. The guidelines are broken down into four distinct areas covering the secure design, secure development, secure deployment, and secure operation and maintenance of AI models.

Image: Freepik
