Major tech companies join US AI safety consortium to set standards for future development
The United States Department of Commerce announced today the creation of the U.S. AI Institute Consortium that will bring together major technology companies, government researchers and academics to support the development of safe and trustworthy artificial intelligence standards.
The new consortium, housed under the U.S. AI Safety Institute, or USAISI, will be part of the executive order signed by President Biden in October establishing guidelines for the safe development of AI, including rules for industry, security standards and consumer protections.
More than 200 tech companies joined the new consortium, including top AI firms such as OpenAI, Google LLC, Anthropic PBC, Microsoft Corp., Meta Platforms Inc., Amazon Inc. and AI chipmaker Nvidia Corp. Other industry giants joining the organization included Apple Inc., Cisco Systems Inc., IBM Corp., Intel Corp. and Qualcomm Inc.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said U.S. Secretary of Commerce Gina Raimondo. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”
According to the announcement, the consortium includes the largest collection of test and evaluation teams ever assembled in the country. The consortium also includes members from state and local governments, as well as non-profits and will be tasked with working with teams from other nations toward its goal.
That goal will include developing guidelines for red teaming, safety and capability evaluations, security, trustworthiness and watermarking AI-generated content.
Red teaming is a security risk assessment done when a team attempts to break through security systems set up by another team in an ethical manner as an “enemy,” or “red,” team in an attempt to test how defenses are flawed. Red teaming AI safety would be attempts to make it hallucinate by challenging it to act out, generate false results or create dangerous content in an attempt to help researchers create more trustworthy AI.
“As adoption of AI systems increases across different industry domains, it is vital that appropriate attention is given to individual data privacy, systemic safety and security, and the interoperability of data, models, and infrastructure,” said Dr. Richard Searle, vice president of confidential computing at Fortanix Inc., an encrypted and trusted computing provider and inaugural participant in the consortium.
The announcement comes days after OpenAI and Meta announced that the companies will begin labeling AI-generated images with metadata. This act will make it easier for users and fact-checkers to identify media created by AI sources. Both companies made voluntary commitments to the White House initiative for AI safety in July, which included labeling content created by AI. Google has also developed its own digital watermarking capability called SnythID for AI-generated content.
Image: Pixabay
A message from John Furrier, co-founder of SiliconANGLE:
Your vote of support is important to us and it helps us keep the content FREE.
One click below supports our mission to provide free, deep, and relevant content.
Join our community on YouTube
Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.
THANK YOU