UPDATED 16:41 EDT / APRIL 26 2024

AI

CEOs of Microsoft, Nvidia and other tech giants join federal AI advisory board

A group of prominent tech executives will join the Artificial Intelligence Safety and Security Board, a panel tasked with advising the federal government on the use of AI in critical infrastructure.

The Wall Street Journal reported the development today. According to the paper, the panel comprises not only representatives of the tech industry but also academics, civil rights leaders and the chief executives of several critical infrastructure companies. In all, the Artificial Intelligence Safety and Security Board will have nearly two dozen members. 

Microsoft Corp. Chief Executive Satya Nadella, Nvidia Corp. CEO Jensen Huang and OpenAI’s San Altman are among the participants. They will be joined by their counterparts at Advanced Micro Devices Inc., Amazon Web Services Inc., Anthropic PBC, Cisco Systems Inc., Google LLC and IBM Corp.

Secretary of Homeland Security Alejandro Mayorkas is leading the panel. According to the Journal, the Artificial Intelligence Safety and Security Board will advise the Department of Homeland Security on how to safely apply AI in critical infrastructure. The panel’s members will convene every three months starting in May. 

Besides providing advice to the federal government, the panel will also produce AI recommendations for critical infrastructure organizations. The effort is set to focus on companies such as power grid operators, manufacturers and transportation service providers. The panel’s recommendations will reportedly focus on two main topics: ways of applying AI in critical infrastructure and the potential risks posed by the technology.

Multiple cybersecurity companies have observed hacking campaigns that make use of generative AI. In some of the campaigns, hackers are leveraging large language models to generate phishing emails. In other cases, AI is being used to support the development of malware.

The Artificial Intelligence Safety and Security Board was formed through an executive order on AI that President Joe Biden signed last year. The order also called on the federal government to take a number of other steps to address the technology’s risks. The Commerce Department will develop guidance for identifying AI-generated content, while the National Institute of Standards and Technology is working on AI safety standards.

The executive order established new requirements for private companies as well. In particular, tech firms developing advanced AI must now share data about new models’ safety with the government. This data includes the results of so-called red team tests, evaluations that assess neural networks’ safety by simulating malicious prompts.

Several of the AI ecosystem’s largest players have made algorithm safety a focus of their research efforts. OpenAI, for example, in December revealed that it’s developing an automated approach to addressing the risks posed by advanced neural networks. The method involves supervising an advanced AI model’s output using a second, less capable neural network.

Image: Unsplash

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU