UPDATED 12:35 EDT / OCTOBER 30 2023


Biden signs executive order directing artificial intelligence companies to develop safer AI

President Biden signed an executive order today establishing new guidelines for the development of artificial intelligence, including rules for the industry, security standards, consumer protections and federal oversight.

Ever since OpenAI LP popularized generative AI with the release of its ChatGPT chatbot almost a year ago, the technology's capabilities have wowed the world. These tools can hold humanlike conversations at length, produce poetry, mimic other people's writing styles, compose essays and summarize the news. In the year since ChatGPT's public debut, generative AI has also grown to encompass image generation, image recognition and voice synthesis.

The executive order builds on previous efforts by the White House to shape the safe development of AI and to contain its potentially dangerous effects. For example, the same ability to produce human-quality text that makes AI useful also gives it the potential to churn out fake news, and the technology can be used to create high-quality voice clones and deepfake images that are almost indistinguishable from real pictures. In July, the White House secured voluntary commitments from major AI companies, including Meta Platforms Inc., Amazon.com Inc., OpenAI and Google LLC, to produce safe and trustworthy AI.

Under the new guidelines, the government will approach privacy and safety in the development of AI from multiple angles. On cybersecurity and safety, the order invokes the Defense Production Act to require that companies developing foundation models posing risks to national security or public health notify the federal government and share their security audits before making their models public.

To support this, the order directs various government bodies, including the National Institute of Standards and Technology, to set standards for AI safety and testing. The departments of Energy and Homeland Security will address potential AI threats to infrastructure and cybersecurity before the release of AI models that affect critical infrastructure, biotechnology or nuclear security.

“President Biden’s executive order on AI signifies a watershed moment for national security and the international AI arena,” Oz Alashe, chief executive of the risk management platform CybSafe Ltd., told SiliconANGLE. “The decision to mandate stringent evaluations of cutting-edge AI technologies before their use by federal agencies highlights both the transformative promise and lurking dangers of AI systems. Generative AI, with its power to craft realistic narratives, can be especially dangerous when given sensitive and private information from companies.”

In particular, generative AI has become a breeding ground for new types of cyberthreats, including malware, new ways to steal passwords, scams that trick people into thinking they’re talking to another human being, and more insidious ways to spread disinformation. As a result, the technology is tilting the advantage toward attackers and making it harder for defenders to detect AI-driven activity, even as efforts to educate users continue.

“He was as impressed and alarmed as anyone,” Deputy White House chief of staff Bruce Reed said of Biden’s reaction to generative AI in an interview with the Associated Press. “He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he’s seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation.”

The order also focuses on rules that allow AI systems to use training data while still preserving citizens’ privacy. AI systems require vast amounts of data from numerous sources to produce lifelike conversation, create vivid images and power their underlying models.

However, they can also scoop up private information about individuals in the process. Under the order, AI developers must take steps to protect Americans’ data from being used without their permission.

And the order has limits, noted John Hernandez, president and general manager of Quest Software Inc., because it mainly regulates those who abide by the rules. “Threat actors, however, operate swiftly and are already harnessing AI for their nefarious purposes — and AI alone can’t rectify lax security hygiene,” he said. “Furthermore, AI can’t fully close the talent gap, although it can reduce the time the existing cyber workforce spends hunting down and remediating vulnerabilities. Organizations must continue to prioritize fundamental security measures, such as proper identity controls, threat detection and swift response mechanisms, as part of their overall strategy.”

Furthermore, the order seeks to advance the responsible use of AI, because these systems can be highly discriminatory and lead to bias and abuses of justice. The Biden-Harris administration has already published what it calls a Blueprint for an AI Bill of Rights and an executive order directing agencies to combat algorithmic discrimination, seeking to minimize racial bias. All too often, AI systems absorb discriminatory biases and amplify existing political, racial and class divides.

“Guardrails must be implemented now to ensure that this emerging technology centers equity at every step of development and implementation,” Damon Hewitt, president and executive director of the Lawyers’ Committee for Civil Rights Under Law, told SiliconANGLE. “This executive order is a critical step to help guard against algorithmic bias and discrimination. It can be the beginning of a pathway to a future where AI empowers instead of oppresses.”

In particular, there need to be more rules around what law enforcement agencies can do with AI, said Caitlin Seeley George, campaigns and managing director at Fight for the Future. “As written, the primary action required by the Executive Order regarding law enforcement use of racially biased and actively harmful AI is for agencies to produce reports,” she said. “Reports are miles away from the specific, strong regulatory directives that would bring accountability to this shadow market of harmful tech that law enforcement increasingly relies upon.”

Moreover, she said, “there are high-impact uses where AI decision making should not be allowed at all, including for hiring and/or firing in the workplace; law enforcement suspect identification, parole, probation, sentencing, and pretrial release and detention; and military actions.”

As a result, the order calls on the government to address discrimination through training, technical assistance and coordination with the Department of Justice and federal civil rights offices to investigate and prosecute civil rights violations involving AI. It also calls on the justice system to use AI fairly in sentencing, parole and probation, as well as in surveillance, crime forecasting, forensic analysis and predictive policing.

The U.S. Congress is still in the early days of debating how to regulate generative AI. By comparison, the European Union’s draft AI Act is much further along. It also focuses on safety, privacy and civil rights, but it outright bans the use of AI for so-called predictive policing, in which artificial intelligence is used to forecast potential criminal activity.

Photo: White House
