

Last year, Google Inc. revealed its miniature Project Soli radar chips, which are small enough to fit into mobile devices such as phones and smartwatches and power “touchless interactions,” such as gesture controls. Now they’re getting smarter.
Researchers at the University of St Andrews in Scotland have managed to use Project Soli’s radar not only to sense nearby objects, but also to recognize what those objects are. They have named the new tool Radar Categorization for Input & Interaction, or RadarCat for short.
To create it, the researchers used the same principles behind computer vision, which applies artificial intelligence and machine learning to understand images. Computer vision has been used by companies such as Facebook Inc. and Google itself to describe what is happening in a picture, including what sorts of objects are in the image and what actions are taking place. For example, Google’s computer vision agent has produced very specific descriptions such as “a dog sitting on a beach.”
RadarCat does not appear to be quite as smart as Google’s computer vision agent yet, but it has been trained using similar machine learning methods, which means it should continue to improve over time. Currently, RadarCat can recognize a wide range of simple stationary objects, such as fruits and office supplies, but it can also grasp subtler distinctions, such as the difference between an empty glass and a glass of water, or between the front and back of a phone.
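To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of supervised training such a system involves: radar snapshots of known objects become labeled feature vectors, and a standard classifier learns to tell the classes apart. The channel count, object labels and synthetic data below are placeholders of our own, not RadarCat’s actual pipeline or features.

```python
# Illustrative sketch only: trains a classifier on synthetic "radar signal"
# feature vectors. RadarCat's real features and pipeline differ; this just
# demonstrates the supervised-learning idea described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

N_CHANNELS = 8           # hypothetical number of radar receive channels
SAMPLES_PER_CLASS = 200  # labeled snapshots collected per object
CLASSES = ["apple", "empty_glass", "glass_of_water",
           "phone_front", "phone_back"]

def synth_signal(class_idx):
    """Synthetic stand-in for one radar snapshot of an object:
    a characteristic per-channel profile plus measurement noise."""
    base = np.linspace(0.1, 1.0, N_CHANNELS) * (class_idx + 1)
    return base + rng.normal(scale=0.3, size=N_CHANNELS)

# Build a labeled dataset: one feature vector per snapshot.
X = np.array([synth_signal(i) for i in range(len(CLASSES))
              for _ in range(SAMPLES_PER_CLASS)])
y = np.array([i for i in range(len(CLASSES))
              for _ in range(SAMPLES_PER_CLASS)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A random forest is a common off-the-shelf choice for tabular signal data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"held-out accuracy: {accuracy_score(y_test, pred):.2f}")
print("example prediction:", CLASSES[pred[0]])
```

Real training data would of course come from the Soli sensor itself rather than a random-number generator; the point is only that recognition improves as more labeled examples are collected, which is why a system trained this way keeps getting smarter.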
Because it relies on radar rather than images, RadarCat has a few capabilities that computer vision programs do not. For example, lighting has no effect on RadarCat’s ability to detect objects. More importantly, RadarCat can differentiate between objects that are visually identical but are actually made of different materials.
The researchers noted that RadarCat has a number of potential use cases, such as powering an “object dictionary” that offers useful information about items placed on it, including nutritional information for foods or hardware specifications for electronic devices. RadarCat could also help the visually impaired differentiate between objects that feel similar, such as in the picture below.
Yes, the researchers from St Andrews really made that picture for a video demonstration of RadarCat. You can watch the full video showcasing RadarCat in action below: