UPDATED 13:00 EDT / NOVEMBER 12 2020

AI

Android’s Neural Networks API adds support for PyTorch to enable on-device AI processing

Google LLC’s Android team today added a prototype feature that makes it possible for developers to perform “hardware accelerated inference” on mobile devices using the PyTorch artificial intelligence framework.

The addition of support for PyTorch, a framework first built by Facebook Inc., means that thousands more developers will be able to use the Android Neural Networks application programming interface to run computationally intensive AI models on-device, Google said.

In a blog post, product manager Oli Gaymond said the Neural Networks API was built by Google to give Android devices a way to perform inference themselves, instead of transmitting data to a remote server for processing. There are big benefits to processing data on-device, Gaymond said, such as lower latency, stronger privacy and the ability for certain features to work without an internet connection.

The most common use cases for on-device processing are tasks such as computer vision and audio enhancement, Gaymond said. One example might be segmenting a user from the background during a video call. That kind of task is highly sensitive to latency, so it’s better done using the device’s own hardware than at a faraway data center. That’s where the Android Neural Networks API comes in.

It already works with Google’s own TensorFlow Lite framework, but adding support for PyTorch Mobile opens it up to the thousands of developers who are more experienced with that framework, Gaymond said.

Facebook has already used a prototype of the Android Neural Networks API with PyTorch support to enable immersive 360-degree backgrounds on Messenger video calls. Gaymond said the results were impressive: Facebook saw a 2x speedup and a 2x reduction in power consumption, while also offloading work from the phone’s central processing unit and freeing it up for other critical tasks.

“Today’s initial release includes support for well-known linear convolutional and multilayer perceptron models on Android 10 and above,” Gaymond said. “Performance testing using the MobileNetV2 model shows up to a 10x speedup compared to single-threaded CPU. As part of the development towards a full stable release, future updates will include support for additional operators and model architectures including Mask R-CNN, a popular object detection and instance segmentation model.”
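For developers curious what the workflow looks like, the sketch below shows the general PyTorch-to-NNAPI export flow: trace a small model with TorchScript on a development machine, then hand the trace to the prototype NNAPI converter. The tiny convolutional model, input sizes and file name here are illustrative assumptions, and the `torch.backends._nnapi` module is a prototype API whose location and behavior may differ across PyTorch releases, so the conversion step is guarded; the converted model itself only executes on an Android device.

```python
import torch
import torch.nn as nn

# A tiny convolutional model standing in for something like MobileNetV2.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
).eval()

# NNAPI works with channels-last (NHWC) inputs, so trace with one.
example = torch.randn(1, 3, 32, 32).contiguous(memory_format=torch.channels_last)
example.nnapi_nhwc = True  # hint to the converter that the runtime layout is NHWC

# TorchScript-trace the model; the trace is what gets converted.
with torch.no_grad():
    traced = torch.jit.trace(model, example)
out = traced(example)
print(tuple(out.shape))  # (1, 8, 32, 32)

# Prototype conversion step: this API is experimental and may be absent
# or different in other PyTorch builds, hence the guard.
try:
    from torch.backends._nnapi.prepare import convert_model_to_nnapi
    nnapi_model = convert_model_to_nnapi(traced, example)
    nnapi_model._save_for_lite_interpreter("model_nnapi.pt")
    print("saved NNAPI model")
except Exception as exc:
    print("NNAPI conversion unavailable here:", exc)
```

The key design point is that conversion happens ahead of time on the developer’s machine; at runtime the saved model dispatches its supported operators to the device’s NNAPI drivers.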

Image: FunkyFocus/Pixabay
