UPDATED 20:13 EDT / APRIL 05 2023

Amid ‘hallucinations,’ OpenAI details how it’s striving to improve the accuracy and safety of its AI systems

Just hours after U.S. President Joe Biden called on artificial intelligence software developers to take responsibility for the safety of their products, ChatGPT creator OpenAI LP has gone public with the measures it takes to minimize the dangers its systems might pose.

In a blog post today, OpenAI said it recognized the potential risks associated with AI and maintained that it’s committed to building safety into its models at multiple levels.

OpenAI’s safety measures include conducting rigorous testing of any new system prior to its release, engaging with experts for feedback, and refining the model to improve its behavior using techniques such as reinforcement learning from human feedback. It noted that it spent more than six months testing and refining its latest large language model, GPT-4, before releasing it publicly last month.

Even so, OpenAI recognizes that testing in a lab setting can only go so far, since it’s not possible to predict all of the ways in which people might decide to use — and also abuse — its systems. Because of this, it said, all new AI systems are released cautiously and gradually, to a steadily broadening group of users, with continuous improvements and refinements made based on feedback from real-world users.

“We make our most capable models available through our own services and through an API so developers can build this technology directly into their apps,” the company explained. “This allows us to monitor for and take action on misuse, and continually build mitigations that respond to the real ways people misuse our systems — not just theories about what misuse might look like.”
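OpenAI’s post contains no code, but for context, here is a minimal sketch of what a developer-facing call to that API looked like in early 2023, using the openai Python package. The API key, model name and prompt are illustrative placeholders, not details from OpenAI’s post.

```python
# Minimal sketch of calling OpenAI's API with the pre-1.0 openai
# Python package (early 2023). All values here are placeholders.
import openai

openai.api_key = "sk-..."  # replace with a real API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # GPT-4 API access was waitlisted at the time
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize OpenAI's approach to AI safety."},
    ],
)

# The assistant's reply sits inside the first returned choice.
print(response["choices"][0]["message"]["content"])
```

Because those calls flow through OpenAI’s servers, the company can observe usage patterns across applications, which is the monitoring capability the post refers to.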

Strict safety measures are also in place to protect children. As a matter of course, OpenAI requires that users be aged 18 or over, or at least 13 with parental approval, to use its AI systems.

At the same time, blocks have been implemented to prevent its systems from generating hateful, harassing, violent or adult content. These are being improved continuously, and GPT-4 is said to be 82% less likely to respond to requests for disallowed content than its previous model, GPT-3.5. If someone tries to upload child sexual abuse material to one of its image tools, that content will immediately be blocked and reported to the National Center for Missing & Exploited Children.
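The post doesn’t spell out how those blocks are enforced internally, but OpenAI does offer a free moderation endpoint that performs this kind of content classification. A hedged sketch of how a developer might screen user input with it, again using the early-2023 Python package:

```python
# Sketch of screening user input with OpenAI's moderation endpoint
# (openai Python package, early 2023). The input text is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder

result = openai.Moderation.create(input="text submitted by a user")
verdict = result["results"][0]

if verdict["flagged"]:
    # "categories" maps labels such as "hate", "sexual" and "violence"
    # to booleans, so a flagged input can be rejected before it ever
    # reaches the main model.
    triggered = [name for name, hit in verdict["categories"].items() if hit]
    print("Blocked; flagged categories:", triggered)
else:
    print("Input passed moderation.")
```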

Factual correctness is another area that OpenAI is striving to improve, because one of the major problems with AI is hallucination, where a system confidently fabricates information rather than admitting it doesn’t know the answer. AI hallucination can cause serious problems, with one recent example being the law professor who was falsely accused by ChatGPT of sexually harassing one of his students. ChatGPT cited a 2018 article in The Washington Post as the source of this claim — but no such article existed, and the professor in question had never faced such accusations.

Obviously, there’s an urgent need to prevent these kinds of mistakes, and OpenAI said it’s working to improve ChatGPT’s accuracy by leveraging user feedback on responses that have been flagged as incorrect. As a result, GPT-4 is 40% more likely to generate factual answers than GPT-3.5, it said.

OpenAI admits that some of the training data used by its systems contains personal information that is publicly available on the web. However, it stressed that its goal is for its systems to learn about the world rather than about private individuals. To that end, its team attempts to remove personal information from training datasets whenever feasible. At the same time, it has fine-tuned its models to reject requests for the personal information of private individuals, and OpenAI also responds to requests from individuals to have their personal information deleted from its systems.

“These steps minimize the possibility that our models might generate responses that include the personal information of private individuals,” the company explained.

Charles King of Pund-IT Inc. told SiliconANGLE that it’s good to see OpenAI approaching the issue of AI safety in such a proactive way. “It appears to be in a strong position to press its point of view,” he said. “I hope Microsoft, Google and other players will follow a similar path.”

OpenAI’s disclosure on AI safety is timely, as there have been public calls for the industry to pause the development of advanced AI systems. Last month, more than 1,000 people, including various prominent tech executives, researchers and authors, signed an open letter urging companies like OpenAI to halt their development work and instead focus on defining safety protocols for the industry to ensure that AI systems are “safe beyond a reasonable doubt.” OpenAI has rejected the idea of a pause, and today’s disclosure on its approach to AI safety is another indication that it will continue to press ahead.

King told SiliconANGLE that it’s unclear what benefit could be gained from temporarily pausing AI development work unless countries such as China, Russia and North Korea also verifiably agree to hit the pause button. “If they agree, that’s great, but if not, then slowing AI development might put the U.S. and its allies into an also-ran position that would be difficult to make up.”

So for now at least, the work will continue, though AI safety will remain an ongoing concern. OpenAI said it will become increasingly cautious as it builds and deploys more capable models in the future. The good news is that it believes more advanced and sophisticated models will be even safer than its existing systems, as they will be better at following users’ instructions and easier to control.

“It’s good to see OpenAI laying down what basically amounts to its ethical principles,” said Holger Mueller of Constellation Research Inc. “But it remains to be seen if this kind of self-governance will suffice to prevent government regulation.”

OpenAI may be willing to accommodate regulations, though, and even to help lawmakers design them. Concluding its blog post, it called on policymakers and AI providers to ensure that the development and deployment of AI systems are governed effectively on a global scale. More dialogue will be required to do this, and it said it’s keen to participate.

“Addressing safety issues also requires extensive debate, experimentation, and engagement, including on the bounds of AI system behavior,” OpenAI said. “We have and will continue to foster collaboration and open dialogue among stakeholders to create a safe AI ecosystem.”

Image: OpenAI
