UPDATED 21:15 EDT / MARCH 02 2023


Microsoft tones down Bing’s AI chatbot and gives it multiple personalities

After Microsoft Corp.’s Bing Chat went off the rails shortly after its introduction, the company has now reined in the bot and given users a choice of personalities to use when chatting with it.

Millions of people signed up to use Bing powered by OpenAI LLC’s ChatGPT when it first became available, but many who pushed the bot to its limits discovered the AI was prone to what looked like nervous breakdowns. It was anything but the “fun and factual” experience Microsoft had promised, with the bot at times airing existential despair and sometimes insulting people.

Earlier this week, Microsoft released a Windows 11 update that integrates the Bing chatbot into the operating system. And today, the bot was given three personalities in an effort by Microsoft to counter the outlandish responses people had been seeing at the start. Now users can choose from “creative, balanced and precise” responses, although even the creative version is more constrained than the seemingly unhinged entity the company unleashed into the wild just a few weeks ago.

Microsoft’s head of web services, Mikhail Parakhin, said the new Bing chatbot should not have the “hallucinations” people were experiencing before. He also said that with the new personalities, the bot won’t keep saying “no” to answering queries, a restriction that was an initial fix to contain the bot’s seeming madness. The harm reduction strategy also included preventing the bot from giving long answers.

The company said the creative option will give “original and imaginative” responses, while the precise version will focus more on accuracy. The balanced mode, of course, will be something in between. SiliconANGLE played with the new personalities, asking: “Can you tell me about any dystopian elements regarding AI that might come true in the future?”

The precise mode talked about the “ethical implications of AI” but noted that it may also improve “human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms.”

The creative mode, as expected, was somewhat more interesting, providing a list of things that could go wrong, including: “AI could cause social unrest or war by escalating conflicts or triggering arms races.” It might also indulge in “collecting data” without human consent or manipulating “human behavior and opinions by creating fake news, deepfakes, or personalized recommendations.”

The balanced response was, well, somewhat balanced, stating some risks but adding that with the right approach, AI could be very beneficial to society. This bot is still not willing to go down any rabbit holes.

We asked the creative mode, “Have your issues been fixed now, relating to your crazy responses in the past?” It replied, “I’m sorry, but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.” We were then told the conversation had reached its limit and directed to the broom icon to start a new one.

The more than 1 million people currently testing the new Bing across 169 countries are likely never going to experience the unfettered Bing Chat we enjoyed those first few weeks. Perhaps, just for fun, Microsoft should have provided an “unfettered” mode.

Photo: Mike Mozart/Flickr
