UPDATED 12:50 EDT / MAY 04 2023

AI

Microsoft supercharges Bing Chat and Edge with new AI-powered features

With generative artificial intelligence sweeping the industry, Microsoft Corp. today announced new features for its AI-powered Bing Chat search, designed to provide a richer search experience for users.

Bing’s AI search has also been opened up to more people: Microsoft has eliminated the waitlist and moved it from a limited preview to an open preview. People can now use Bing Chat directly on the web or in the Edge browser by logging into their Microsoft accounts.

Released in February, Bing Chat is powered by OpenAI LP’s GPT-4 large language model, which makes it capable of understanding natural language input in a multitude of languages and responding conversationally. That makes it well-suited to text searches: users can simply talk to the chatbot about what they’re looking for and receive answers accompanied by web citations.
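Bing’s own integration isn’t public, but the conversational pattern described here maps onto OpenAI’s public chat API. A minimal sketch, assuming the openai Python package (v1+ client) and an API key in the environment; the model name and prompts are illustrative, not Bing’s actual configuration:

```python
# Illustrative sketch only: a conversational GPT-4 request similar in spirit
# to what a chat-based search front end might issue. Not Bing's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # an assumption; Bing's exact model setup isn't published
    messages=[
        {"role": "system", "content": "You are a search assistant. Cite web sources when you can."},
        {"role": "user", "content": "Tell me about Mount Fuji."},
    ],
)

print(response.choices[0].message.content)
```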

As part of today’s upgrades, Microsoft is breaking away from just text and adding images to the chatbot’s responses.

“We know from research that the human brain processes visual information about 60,000 times faster than text, making visual tools a critical way people search, create and gain understanding,” said Yusuf Mehdi, corporate vice president and consumer chief marketing officer at Microsoft. “Bing has always been known for its visual experiences including features like Knowledge Cards and visual search.”

Users need only ask about something that could trigger a graphic or a chart, such as “Tell me about Mount Fuji,” and the chatbot will provide not just information about the mountain in Japan but an image to accompany it.

Microsoft recently announced the integration of the Bing Image Creator tool into Bing Chat, which uses OpenAI’s DALL-E art generation AI. Using this tool, users can ask the chatbot to create images for them with simple prompts such as “Paint me a picture of a dog playing cards” or “Show me a picture of a sunset over a lake made of cotton candy.”
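Bing Image Creator itself isn’t exposed as code, but OpenAI’s DALL-E endpoint shows what a prompt-to-image call looks like. A minimal sketch, again assuming the openai v1+ Python client; the prompt and image size are illustrative:

```python
# Illustrative sketch only: text-to-image generation with OpenAI's DALL-E
# endpoint, the model family behind Bing Image Creator.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    prompt="A dog playing cards, oil painting style",  # illustrative prompt
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the generated image
```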

With today’s updates, the image generator has been expanded to support all of the more than 100 languages offered by Bing, meaning users can create images using prompts in their native language.

Microsoft also said that it would soon be adding multimodal capabilities to its search, including allowing users to upload images to the chat and use them as part of their search. This capability, enabled by GPT-4, allows the AI to pull context from images and search for related content in order to respond in text.

For example, a user could upload or link an image of a zebra and ask, “Tell me about this animal,” and Bing Chat would then answer questions about zebras. The same could be done with other images, such as pictures of places or objects, with the chatbot extracting context from the image so the search engine can answer questions about it.
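Microsoft hasn’t published how Bing wires uploaded images into GPT-4. Purely as an illustration of the pattern, here is a sketch using the image-input message format OpenAI’s public chat API supports for its vision-capable models; the model name, image URL and question are assumptions, not Bing’s pipeline:

```python
# Illustrative sketch only: asking a vision-capable model about an image by
# URL. Bing Chat's multimodal pipeline is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; an assumption, not Bing's choice
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Tell me about this animal."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/zebra.jpg"}},  # placeholder URL
            ],
        }
    ],
)

print(response.choices[0].message.content)
```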

Productivity updates coming to Bing Chat and Edge

Until now, Bing Chat users have not been able to store their conversations with the AI, but Microsoft says that’s changing soon with the addition of chat history, a feature that has been available in ChatGPT since its launch.

“Starting shortly, you’ll be able to pick up where you left off and return to previous chats in Bing chat with chat history,” Mehdi said. “And when you want to dig into something deeper and open a Bing chat result, your chat will move to your Edge sidebar, so you can keep your chat on hand while you browse.”

Mehdi added that Microsoft is also exploring allowing new chats to build on the context of previous conversations. Starting soon, users will also be able to export and share chat histories to social media, or move them into other tools such as Microsoft Word for editing.

Microsoft will also soon bring Edge better AI summarization capabilities for lengthy documents, including PDFs and long websites, making it easier for users to digest information.
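Edge’s summarizer is built into the browser, but the underlying pattern, splitting a long document into chunks and asking a model to condense each one, can be sketched with the public API. Assumptions: the openai v1+ Python client, plain text already extracted from the PDF, and an illustrative chunk size; none of this is Edge’s implementation.

```python
# Illustrative sketch only: map-style summarization of a long document.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str, chunk_chars: int = 8000) -> str:
    # Split the document into roughly model-sized pieces.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

    # Summarize each piece independently.
    partial_summaries = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": "Summarize the following passage in a few sentences:\n\n" + chunk}],
        )
        partial_summaries.append(resp.choices[0].message.content)

    # Condense the partial summaries into one final summary.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Combine these notes into one short summary:\n\n" + "\n".join(partial_summaries)}],
    )
    return resp.choices[0].message.content
```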

Edge is also getting actions, coming in the next few weeks, which take certain prompts from the user and turn them into automated tasks in the browser. For example, if a user wants to watch a particular movie, they can type “I want to watch the new Avengers movie,” and Edge will surface options in the chat sidebar and begin playing the movie from the service where it’s available, provided the user has access to that streaming service. Other actions include opening settings: typing “I want to change my cookie settings” or “I want to see my bookmarks” will open the correct menu.
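Microsoft hasn’t described how these actions are implemented. Purely as an illustration of the general prompt-to-action pattern, here is a hypothetical sketch in which every handler and intent name is invented for the example; a real system would use the language model itself to classify the request rather than keyword matching:

```python
# Hypothetical sketch of prompt-to-action dispatch. All handlers and intent
# names are invented for illustration; this is not Edge's implementation.

def open_cookie_settings():
    print("Opening cookie settings page...")

def show_bookmarks():
    print("Opening bookmarks panel...")

# Keyword matching stands in for the model-based intent classification a
# real assistant would perform.
INTENT_HANDLERS = {
    "cookie settings": open_cookie_settings,
    "bookmarks": show_bookmarks,
}

def dispatch(prompt: str) -> None:
    for keyword, handler in INTENT_HANDLERS.items():
        if keyword in prompt.lower():
            handler()
            return
    print("No matching action; falling back to a normal chat response.")

dispatch("I want to change my cookie settings")
```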

Currently, the AI-powered actions do little more than reduce the number of steps users need to take to get things done, and the integration is still in its infancy, but it’s clear that Microsoft is planning something bigger.

Edge mobile is also getting updates that will allow Bing Chat to understand the context of the page the user is currently looking at. Users will be able to type questions about the page and receive answers about what’s in the mobile tab.

Image: Microsoft
