UPDATED 21:44 EDT / MAY 01 2023


Researchers use AI combined with MRI to decode human thoughts

Researchers at the University of Texas at Austin have been using artificial intelligence in combination with fMRI scans to translate brain activity into continuous text.

The results were published today in the journal Nature Neuroscience under the title “Semantic reconstruction of continuous language from non-invasive brain recordings.” The scientists describe it as the first non-invasive technique of its kind able to reconstruct not just small sets of words or phrases but continuous streams of words.

The decoder was trained by having participants listen to podcasts inside an fMRI scanner, a machine that measures brain activity through changes in blood flow. No surgical implants were involved, which is why this is such an interesting breakthrough. Each participant listened to 16 hours of podcasts while inside the scanner, and the decoder, built on GPT-1, the precursor to ChatGPT, was trained to turn their brain activity into meaning. In short, it became a mind reader.
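A rough way to picture this kind of decoder is as a guess-and-check loop: a language model proposes candidate word sequences, an “encoding model” predicts the brain response each candidate would evoke, and the candidates whose predicted responses best match the measured fMRI signal survive. The minimal Python sketch below illustrates that idea only; it is not the researchers’ code, and the toy vocabulary, random stand-in features, and dimensions are all made up for illustration.

```python
# Conceptual sketch only -- NOT the authors' implementation. A stand-in
# "language model" proposes candidate word sequences, a toy "encoding model"
# predicts the fMRI response each candidate would evoke, and a beam search
# keeps the candidates whose predictions best match the measured activity.
# Vocabulary, feature dimensions, and voxel counts here are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["i", "don't", "have", "my", "driver's", "license", "yet",
         "she", "started", "to", "learn", "drive"]
N_VOXELS = 50     # toy number of fMRI voxels
EMBED_DIM = 16    # toy text-feature dimension

# Stand-in text features: in a real system these would come from a GPT-style
# language model; here each word simply gets a fixed random vector.
word_vecs = {w: rng.normal(size=EMBED_DIM) for w in VOCAB}

def text_features(words):
    """Average word vectors as a crude stand-in for language-model features."""
    return np.mean([word_vecs[w] for w in words], axis=0)

# Toy "encoding model": a linear map from text features to predicted voxel
# responses. In practice this would be fit on the hours of training scans.
encoding_weights = rng.normal(size=(EMBED_DIM, N_VOXELS))

def predict_response(words):
    return text_features(words) @ encoding_weights

def score(candidate_words, measured_response):
    """Higher is better: negative distance between predicted and measured fMRI."""
    return -np.linalg.norm(predict_response(candidate_words) - measured_response)

def decode(measured_response, length=6, beam_width=3):
    """Beam search: extend candidate sequences word by word, keeping those
    whose predicted brain responses best match the measurement."""
    beams = [[]]
    for _ in range(length):
        expanded = [beam + [w] for beam in beams for w in VOCAB]
        expanded.sort(key=lambda c: score(c, measured_response), reverse=True)
        beams = expanded[:beam_width]
    return beams[0]

# Simulate "measured" brain activity for a known sentence, then run the decoder.
true_sentence = ["i", "don't", "have", "my", "license", "yet"]
measured = predict_response(true_sentence) + rng.normal(scale=0.05, size=N_VOXELS)
print(" ".join(decode(measured)))
```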

In the past, there has been some success in this area when implants have been used. Such technology might be useful for people who have lost the ability to speak, or for people who have lost the use of their limbs and could use it to “write” virtually. “This isn’t just a language stimulus,” Alexander Huth, a neuroscientist at the university, explained to the New York Times. “We’re getting at meaning, something about the idea of what’s happening. And the fact that that’s possible is very exciting.”

It’s not perfect, but in tests, it wasn’t always far off. When a participant heard the words, “I don’t have my driver’s license yet,” the decoder translated their brain activity as, “She has not even started to learn to drive yet.” In another case, the participant heard, “I didn’t know whether to scream, cry or run away. Instead, I said: ‘Leave me alone!’” The decoder rendered that more complex sentence as, “Started to scream and cry, and then she just said: ‘I told you to leave me alone.’”

In other experiments, the participants were asked to watch videos that didn’t have any sound at all, and this time the decoder was able to describe what they were seeing. There’s a long way to go, and the researchers ran into issues at times, but still, other scientists working in this field have called the breakthrough “technically extremely impressive.”

It may also be a cause for concern, given that mind-reading can sound somewhat dystopian when you consider its myriad uses other than helping disabled people. The CIA spent decades trying to control and read minds under its secret Project MKUltra.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said one of the authors of the paper, addressing this issue. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

Photo: NIH Image Gallery/Flickr
