UPDATED 14:40 EDT / MARCH 28 2024

AI

Biden administration unveils new AI safeguard rules for federal use

The Biden administration today announced new policies for the government’s use of artificial intelligence, calling upon federal agencies to adopt “concrete safeguards” by Dec. 1 to protect the rights and safety of Americans.

The new directives, outlined in a fact sheet, state that the White House Office of Management and Budget is issuing three new policies that will improve transparency and require that government agencies’ use of AI does not endanger the safety of U.S. citizens. In addition, all federal agencies must designate a chief AI officer.

“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its full benefit,” Vice President Kamala Harris told reporters on a press call.

The new guidelines for AI safeguards follow an executive order signed by President Joe Biden in October that called for developers of AI systems to bolster security standards, transparency, consumer protections and federal oversight. This included additional safeguards for AI models that could pose risks to national safety or public welfare, requiring developers to share security audits with the federal government before making their models public.

To meet these new safeguards, federal agencies using AI must apply a range of mandatory actions to assess, test and monitor AI uses that affect the public, the OMB said. The safeguards apply across numerous areas, including healthcare, housing and education, where AI can touch on critical issues of rights and safety.

Examples include the use of AI in airport screening, where travelers will be able to opt out of Transportation Security Administration facial recognition without extra delays. If the Department of Veterans Affairs chooses to use AI in VA hospitals to help diagnose patients, a human must remain in the loop to oversee the process and verify results so that mistakes are not overlooked. When AI is used in fraud detection, human oversight must remain in place so that automated systems do not create bureaucratic nightmares and a person can correct potential harms.

The OMB added that any agency unable to apply these safeguards must stop using AI by the deadline until the policies can be properly implemented.

“The introduction of binding AI requirements for U.S. federal agencies is a significant step that aligns with the global movement toward increased AI governance and regulation,” Anita Schjøll Abildgaard, co-founder and chief executive of Iris.ai, told SiliconANGLE. “This comes on the heels of the EU AI Act, which has set the stage for a comprehensive risk-based regulatory framework for artificial intelligence systems. As nations grapple with the societal implications of AI, coordinated efforts to establish guardrails are crucial for fostering innovation while upholding trust, core values and human rights.”

The European Union similarly tackled the rise of AI with its own AI Act, the world’s first major regulatory framework governing the use of AI, which was passed earlier this month and now carries the force of law. The AI Act covers much of the same ground as Biden’s October executive order, targeting AI discrimination and transparency while banning “high-risk” applications of AI, especially those that threaten the rights and privacy of citizens.

On top of safeguarding safety and rights, the new OMB policies also call for greater transparency in federal agencies’ use of AI. The guidelines expand the annual inventories of agency AI use cases, including those that affect rights and safety and how agencies are working to address those risks. Government-owned AI code, models and data will also be released when doing so does not pose a risk to the public or government operations.

Certain sensitive AI use cases, such as those related to national security, will be exempt from public inventory and data release, the OMB said.

“While I think these regulations are necessary for helping monitor AI and keeping the technology safe for the public, it’s crucial that it isn’t stifling innovation,” Alon Yamin, co-founder and CEO of Copyleaks Ltd., told SiliconANGLE. “Furthermore, with the quick turnaround required to get everything in place, there runs the risk of missteps in terms of putting the right people in place or adopting the technology. I think it will take a little while to iron out fully.”

Finally, the new guidelines require agencies to designate chief AI officers to coordinate the use of AI within their agencies, and to establish AI governance boards, chaired by the deputy secretary or equivalent, to coordinate and govern AI use across each agency. Several federal agencies have already created such boards, including the departments of Defense, Veterans Affairs, Housing and Urban Development and State. Affected agencies must create theirs by May 27.

“The current regulations are good first steps in determining how to regulate generative AI without stifling its potential. However, there are still no laws within the US to regulate AI (the Biden executive order simply provides guidelines), which shows that there is still more to be done,” Yamin said. “The EU’s AI Act is an excellent example of robust regulation, but even that is still new, and we have to wait and see how it will roll out.”

Photo: The White House
