In public letter, former OpenAI researchers urge increased transparency on AI risks
A group of machine learning researchers today released a public letter urging the tech industry to develop advanced artificial intelligence models in a more transparent manner.
The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” has 13 signatories. The group includes current and former researchers from OpenAI, Alphabet Inc.’s Google DeepMind research group and Anthropic PBC. The letter has been endorsed by Yoshua Bengio, Geoffrey Hinton and Stuart Russell, three prominent computer scientists known for their foundational contributions to machine learning.
The signatories argue that companies such as OpenAI have a considerable amount of data about the potential risks associated with their AI models. Some of this data is not publicly available, the letter states, and there are no regulations that require AI developers to release the information. As a result, the signatories argue that current and former staffers at machine learning companies “are among the few people who can hold them accountable to the public.”
The letter goes on to outline four steps that AI providers should take to ensure their employees can alert the public to the risks they identify.
The signatories’ first recommendation is that companies “support a culture of open criticism.” According to the letter, an AI provider building cutting-edge models should “allow current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise.”
The signatories argue companies should also create a process that allows staffers to share concerns about AI risks anonymously. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the researchers backing the initiative explained in the letter.
The two other best practices recommended by the signatories focus on protecting staffers who flag AI risks from retaliation. The letter states that companies’ transparency efforts should include, among other commitments, a pledge not to “retaliate for risk-related criticism by hindering any vested economic benefit.”
The release of the letter comes a few weeks after word emerged that OpenAI had included a nondisparagement provision in employees’ offboarding agreements. Under the provision, staffers who criticized the company or declined to accept the clause could lose all their vested equity. A few days after the practice came to light, OpenAI announced that it would not enforce the clause.
More recently, the company formed a Safety and Security Committee tasked with ensuring its AI research is carried out safely. The panel comprises OpenAI Chief Executive Officer Sam Altman, three board members and five engineering executives. In conjunction with the panel’s formation, the company announced that it had recently started training the successor to GPT-4.