UPDATED 13:32 EDT / JUNE 15 2017

EMERGING TECH

Google releases new object recognition algorithms to help make apps smarter

Enabling a mobile app to analyze images is as easy as integrating it with one of the numerous cloud-based object recognition services on the market. But sending files to a remote data center for processing isn’t always efficient because of connectivity constraints and other logistical hurdles.

Now developers have an alternative. Google Inc. on Wednesday open-sourced a family of computer vision algorithms called MobileNets that are specifically designed to run on smartphones and tablets. According to the search giant, the package sidesteps many of the technical obstacles that have historically made it difficult to build object recognition features directly into an app.

The first improvement concerns hardware. Computer vision algorithms, like other types of artificial intelligence, typically require a great deal of processing power, which can be hard to come by on a mobile device. Google designed each of the 16 models in the MobileNets family with a different performance profile, letting developers balance accuracy against resource consumption based on their project requirements.

The less fine-grained an app’s object recognition capability needs to be, the more resources can be left for other processes. For good measure, Google has also optimized MobileNets for power efficiency in a bid to prevent applications that employ its algorithms from draining the battery too quickly.
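To give a rough sense of how that trade-off is exposed to developers, here is a minimal sketch using the MobileNet implementation bundled with TensorFlow’s Keras API; the specific width multiplier and input resolution values are illustrative, not a recommendation from Google.

```python
import tensorflow as tf

# A smaller width multiplier (alpha) and input resolution give a lighter,
# faster model that leaves more headroom for the rest of the app.
light_model = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3),  # reduced input resolution
    alpha=0.25,                 # width multiplier shrinks every layer
    weights="imagenet",
)

# The full-size variant trades extra compute for higher accuracy.
full_model = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),
    alpha=1.0,
    weights="imagenet",
)

print(light_model.count_params(), full_model.count_params())
```

Those two knobs, the width multiplier and the input resolution, are what distinguish the 16 released variants from one another.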

Once they’ve selected a model, developers can start processing images almost immediately because every model in the package comes pre-trained. That’s a major convenience: getting an artificial intelligence model into working order from scratch usually requires a great deal of time and effort, not to mention specialized know-how.
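As a rough illustration of that quick start, the sketch below loads a pre-trained MobileNet and classifies a single image. The file name is a placeholder, and using the ImageNet weights is an assumption about which pre-trained checkpoint a developer would pick.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet import preprocess_input, decode_predictions

# Load a MobileNet that ships with pre-trained ImageNet weights.
model = tf.keras.applications.MobileNet(weights="imagenet")

# "photo.jpg" is a placeholder for whatever image the app captured.
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = preprocess_input(np.expand_dims(x, axis=0))

# Print the top three predicted labels with their confidence scores.
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])
```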

Google has trained MobileNets to analyze facial expressions, recognize objects that appear in images and perform several other related tasks. The search giant claims that the package can compete with leading computer vision algorithms despite having been built to run on handsets. It’s designed to work with the recently introduced mobile edition of TensorFlow, the company’s popular open-source machine learning engine.
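Google hasn’t detailed here how a model gets packaged for the mobile edition of TensorFlow, but as a loose sketch of on-device deployment, a MobileNet could be converted with the TensorFlow Lite converter along these lines; the converter API and the optional quantization flag are assumptions beyond what was announced.

```python
import tensorflow as tf

# Start from a pre-trained variant (width multiplier 0.5 is illustrative).
model = tf.keras.applications.MobileNet(weights="imagenet", alpha=0.5)

# Convert the Keras model into a flat buffer suitable for mobile runtimes.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional weight quantization
tflite_bytes = converter.convert()

with open("mobilenet_0.5.tflite", "wb") as f:
    f.write(tflite_bytes)
```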

MobileNets should help Google cement its position in the artificial intelligence ecosystem and score more points with mobile developers. But on the flip side, the project could compete with the company’s cloud-based object recognition service, which is a key part of its efforts to make money from the rapid rise of AI.  

Image: Google
