UPDATED 13:35 EDT / FEBRUARY 17 2023

AI

OpenAI launches initiative to improve ChatGPT after it went rogue

OpenAI LLC is launching an initiative to provide more transparency about ChatGPT, increase the quality of the answers the artificial intelligence model provides and make it customizable for users.

The startup detailed the effort on Thursday. 

ChatGPT powers a conversational chatbot that Microsoft Corp. began rolling out to its Bing search engine this month. In recent days, the chatbot has drawn scrutiny over some of the answers it provided to early users. The initiative OpenAI detailed today is designed to address the concerns raised about its technology. 

“Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable,” the startup wrote in a blog post. “In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address.”

OpenAI relies on human reviewers to fine-tune the accuracy of the answers that ChatGPT generates. To provide more transparency into the process, the startup has released a part of the guidelines that it provides to reviewers regarding political and controversial topics. The three-page document, which is dated July 2022, includes an overview of OpenAI’s policies concerning such topics as well as information about the internal workflow used to improve ChatGPT responses.

The company is in parallel taking a number of other steps to improve transparency. It will “share aggregated demographic information about our reviewers in a way that doesn’t violate privacy rules and norms, since this is an additional source of potential bias in system outputs,” the startup detailed on Thursday. Furthermore, it plans to provide clearer guidelines to reviewers about how to approach controversial topics.

OpenAI’s newly announced plan to improve ChatGPT also includes several other components.

For one, it’s launching a new research effort focused on ensuring the AI model’s default settings meet quality expectations. The company detailed that there are currently cases where ChatGPT “refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should.” It will take steps to reduce such incidents as well as lower the risk of other errors, such as nonsensical answers.

In a parallel effort to improve ChatGPT, OpenAI engineers are developing features that will enable users to customize the chatbot. The goal, the company stated today, is “allowing system outputs that other people (ourselves included) may strongly disagree with.” Another priority is to make the process of customizing ChatGPT simple for consumers. 

Lastly, OpenAI plans to collect public input on how the default settings of ChatGPT can be improved and what restrictions should be placed on the output the chatbot generates. As part of the effort, it has already begun gathering feedback from educators about the impact of ChatGPT in the classroom.

“We are in the early stages of piloting efforts to solicit public input on topics like system behavior, disclosure mechanisms (such as watermarking), and our deployment policies more broadly,” OpenAI detailed. “We are also exploring partnerships with external organizations to conduct third-party audits of our safety and policy efforts.”

Image: OpenAI

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU