UPDATED 12:20 EDT / JUNE 01 2023

OpenAI takes on AI ‘hallucinations’ with new training techniques

OpenAI LP is researching ways to combat artificial intelligence “hallucinations” with new training methods designed to reduce critical mistakes.

Generative AI models have surged in popularity since OpenAI debuted ChatGPT, a chatbot capable of human-like conversation, in late 2022. Since the unveiling of the more advanced GPT-4 model in March, generative AI large language models have been integrated into a multitude of software applications, including Microsoft Corp.’s Bing search.

Chatbots are capable of impressive feats, such as answering questions, assisting with research, and writing poetry and even computer code. However, they have also proved problematic: They can make serious mistakes, inventing information on the fly and presenting it as fact.

“In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning,” the OpenAI researchers said in an announcement Wednesday. “However, even state-of-the-art models still produce logical mistakes, often called hallucinations.”

These problems have plagued AI chatbots since their debut, including Google LLC’s Bard, which made a factual mistake about the James Webb Space Telescope during its public demo. More recently, a lawyer who used ChatGPT may be facing sanctions after the AI fabricated citations to nonexistent legal cases.

To tackle the problem, OpenAI researchers said, they intend to detect hallucinations by training the model with a reward signal that encourages desirable results and discourages undesirable ones. Crucially, the reward is applied at every step of the reasoning process rather than only at the final conclusion, an approach known as “process supervision,” as opposed to “outcome supervision.”
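
To make the distinction concrete, here is a minimal Python sketch of the two labeling schemes. The data structures and function names are illustrative assumptions for this article, not OpenAI’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Solution:
    steps: list[str]   # chain-of-thought reasoning steps
    answer: str        # final conclusion

def outcome_supervision_labels(solution: Solution, correct_answer: str) -> list[int]:
    # Outcome supervision: a single reward based only on whether
    # the final answer is right.
    return [1 if solution.answer == correct_answer else 0]

def process_supervision_labels(solution: Solution, step_is_valid) -> list[int]:
    # Process supervision: every reasoning step gets its own reward,
    # so flawed logic is penalized even when the answer happens
    # to come out right.
    return [1 if step_is_valid(step) else 0 for step in solution.steps]
```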

The objective is to build a transparent “chain of thought” with feedback on each step of the work, so that each step builds soundly on the last and leads to a better outcome.

The researchers also argued that scrutinizing the process itself has multiple advantages over simply rewarding the model for the outcome, because it produces a better-“aligned” AI model. Having a person supervise each step essentially prevents the ends from justifying the means: With an outcome-supervised reward model, the AI could arrive at a correct answer through a flawed chain of logic, and that flawed reasoning could produce errors in other tasks.

Process supervision, the researchers said, “directly rewards the model for following an aligned chain-of-thought, since each step in the process receives precise supervision,” producing interpretable, transparent reasoning that follows human-approved processes.
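
As an illustration, if the reward model assigns each step a probability of being correct, one plausible way to turn those per-step scores into a single solution score is to multiply them, so a single dubious step drags down the whole solution. The sketch below assumes that aggregation rule; OpenAI’s exact formula may differ:

```python
import math

def score_solution(step_probs: list[float]) -> float:
    # The reward model emits, for each step, a probability that the step
    # is correct; multiplying them estimates the probability that the
    # entire chain of reasoning is sound.
    return math.prod(step_probs)

print(score_solution([0.99, 0.98, 0.97]))  # ~0.94: every step looks solid
print(score_solution([0.99, 0.35, 0.97]))  # ~0.34: one shaky step sinks the score
```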

To test their point, OpenAI researchers compared the two reward-model training methods using a mathematics problem-solving dataset as a testbed. They trained process-supervised and outcome-supervised reward models, generated many candidate solutions for each problem, and selected the solution each reward model ranked highest. The team said the process-supervised reward model performed better across the board.
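
That evaluation loop can be sketched as a simple “best-of-n” selection. In this sketch, `generate` and `reward_model` are hypothetical callables standing in for the actual models:

```python
def best_of_n(problem: str, generate, reward_model, n: int = 16) -> str:
    # Sample n candidate solutions from the language model, score each one
    # with the reward model, and keep the highest-ranked candidate.
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=reward_model)

# Toy usage with stand-in callables, just to show the shape of the loop:
generate = lambda p: f"a candidate solution to {p}"
reward_model = len  # placeholder scorer
print(best_of_n("2 + 2", generate, reward_model, n=4))
```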

The downside of this training method, however, is that it may slow down the training of AI systems. That could discourage adoption, since models would take longer to train than competitors’, but it would also make them considerably safer to use.

To support future research on the subject, the researchers released the dataset of 800,000 human feedback labels used to train their best reward model, so other researchers can pick up where they left off and continue the work.
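
For illustration only, a step-level feedback record might look something like the following; the field names here are assumptions for this article, not the released dataset’s actual schema:

```python
# Each record pairs a problem's reasoning steps with human ratings.
example_record = {
    "problem": "Solve for x: 2x + 3 = 11",
    "steps": [
        {"text": "Subtract 3 from both sides: 2x = 8", "rating": 1},
        {"text": "Divide both sides by 2: x = 4", "rating": 1},
    ],
}
print(len(example_record["steps"]), "labeled steps")
```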

Image: OpenAI
