UPDATED 21:15 EDT / NOVEMBER 10 2016

NEWS

Google’s tiny Project Soli radar can now identify nearby objects

Google Inc. last year revealed its miniature Project Soli radar sensors, which are small enough to fit into mobile devices such as phones and smartwatches to power “touchless interactions” like gesture controls. Now, they’re getting smarter.

Researchers from the University of St Andrews in Scotland have managed to use Project Soli’s radars not only to sense nearby objects, but also to recognize what those objects are. The researchers have named the new tool Radar Categorization for Input & Interaction, or RadarCat for short.

To create it, the researchers used the same principles behind computer vision, which applies artificial intelligence and machine learning to understand images. Computer vision has been used by companies such as Facebook Inc. and Google itself to describe what is happening in a picture, including what sorts of objects are in the image and what actions are being performed. For example, Google’s computer vision agent has been able to come up with very specific descriptions such as “a dog sitting on a beach next to a dog.”

RadarCat does not appear to be quite as smart as Google’s computer vision agent yet, but it has been trained using similar machine learning methods, which means it should continue to improve over time. Currently, RadarCat can recognize a wide range of simple stationary objects, such as fruits and office supplies, and it can also grasp subtler distinctions, such as the difference between an empty glass and a glass of water, or between the front and back of a phone.
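The article does not detail RadarCat’s training pipeline, but as a rough illustration, a supervised classifier over radar-signal feature vectors might look like the Python sketch below. The data here is entirely synthetic, and the random forest model is an assumption chosen for illustration, not a detail confirmed by the researchers.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical object labels like those the article mentions.
    labels = ["apple", "empty glass", "glass of water",
              "phone (front)", "phone (back)"]

    # Synthetic stand-in data: 200 radar "snapshots" flattened into
    # 64-dimensional feature vectors. A real system would use measured
    # Soli signals recorded for objects placed on the sensor.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y = rng.integers(len(labels), size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A random forest is one common choice for this kind of supervised
    # classification; the article does not say which model RadarCat uses.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Classify a new snapshot and report a human-readable label.
    predicted = clf.predict(X_test[:1])[0]
    print("Recognized object:", labels[predicted])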

Because it relies on radar rather than images, RadarCat has a few capabilities that computer vision programs lack. For example, lighting has no effect on RadarCat’s ability to detect objects. More importantly, RadarCat can differentiate between objects that look identical but are made of different materials.

The researchers noted that RadarCat has a number of potential use cases, such as powering an “object dictionary” that can offer useful information about the objects placed on it, including nutritional information for foods or hardware specifications for electronic devices. RadarCat could also help the visually impaired differentiate between objects that feel similar, such as in the picture below.
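As a toy illustration of that “object dictionary” idea, the hypothetical Python sketch below shows recognized labels keying into a table of stored facts; none of this code is from the researchers, and a real system would presumably query a curated database instead.

    # Hypothetical "object dictionary": recognized labels key into stored facts.
    object_info = {
        "apple": "food: roughly 52 calories per 100 g",
        "glass of water": "drink: plain water",
        "phone (front)": "electronics: screen side facing the sensor",
    }

    def describe(label):
        # Fall back gracefully for objects the dictionary does not know.
        return object_info.get(label, "No information stored for '%s'." % label)

    print(describe("apple"))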

[Image: RadarCat bleach demonstration]

Yes, the researchers from St Andrews really made that picture for a video demonstration of RadarCat. You can watch the full video showcasing RadarCat in action below:

[Video: RadarCat demonstration]

Image courtesy of Google
