UPDATED 15:25 EDT / MAY 09 2018

EMERGING TECH

The big takeaway at I/O: Google updates its mobile AI roadmap

Artificial intelligence was the dominant theme of Google’s latest developer conference in Mountain View, just as it was at Microsoft Corp.’s equivalent shindig happening this same week in Seattle.

In late March, Google LLC held a very technical AI developer summit where it made many important announcements in the evolution of its TensorFlow framework for AI modeling, training and deployment. This latest event, its larger annual Google I/O, took more of an application focus, with specific emphasis on AI-enabled features in Google’s mobile product portfolio.

As discussed by Google Chief Executive Sundar Pichai in his Day One keynote Tuesday, the most significant “AI-first” announcements at Google I/O were around several key mobile technologies, tools and applications that will be released later this year:

  • Next-generation AI-optimized chipsets: Google announced version 3 of its Tensor Processing Unit, an application-specific processor for AI training and inferencing. Google was light on details about TPUv3, which will ship later this year, but it stated that the next-generation AI-optimized chips are eight times faster than the TPUv2 chips released last year, can handle up to 100 petaflops of machine learning computation in pod configurations, and will be accessed primarily through Google Cloud. Prior TPU generations have been in use internally at Google since 2015, powering AI-based functionality in Google search results, Google Photos, Google Cloud Vision application programming interface calls and other services delivered from its cloud data centers.
  • Next-generation AI-optimized smartphones: Google announced that Android P, the next major version of its mobile operating system, is now in beta, with a final release due this summer. Its most notable new features use machine learning to learn the user’s habits and predictively adapt settings for battery life, screen brightness and application suggestions. These AI-driven features are designed to enhance the user experience while making optimal use of device resources. They complement other experience features, such as the ability to set manual time limits on apps to cut down on wasted time and gesture controls for switching between apps.
  • Next-generation low-code mobile AI development tool: Google announced the beta of its forthcoming ML Kit for Firebase, which supports low-code development of machine learning apps for the Android and iOS mobile platforms. The SDK enables novice ML developers to build apps with just a few lines of code, sparing them from needing deep expertise in neural-network modeling or optimization. It supports the TensorFlow Lite, Google Cloud Vision and Android Neural Networks APIs, providing prebuilt support for text recognition, face detection, barcode scanning, image labeling and landmark recognition. These APIs can run optimized on-device, enabling ML-driven data processing in real time that works even when there’s no network connection. They can also run in Google Cloud Platform, enabling development and training of more accurate ML models. The APIs also allow experienced ML developers to import mobile-optimized custom models created in TensorFlow Lite, upload them to Google Cloud for serving and update them in the cloud without having to republish. Google provides quick-start model examples for iOS and Android on GitHub.
  • Next-generation AI-enabled mobile digital assistant: Google announced that the next version of its digital assistant offering, Google Assistant, will have several AI-powered enhancements. The next version of Assistant, slated for availability later this year on Android and iOS devices, will be able to respond to questions with multiple subjects. It also will be able to continue conversations without having to constantly repeat the trigger phrase “Hey Google.” And, via the new Google Duplex feature built and trained in TensorFlow Extended, it will be able to carry on natural phone conversations, based on its ability to understand complex sentences, fast speech, long remarks and speaker intent.
  • Next-generation AI-enabled mobile-map recommendations: Google announced that its Maps app will include AI-personalized real-time recommendations when the next version is released for Android and iOS later this year. A new “visual positioning system” feature will display an augmented-reality overlay of place names, street names and directions in the user’s smartphone camera view (pictured). Real-time recommendations will be based on an AI-generated “your match” score computed not only from manual inputs by users but also from Google’s vast database of places to eat, stay, tour and so on.
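ML Kit’s design of running the same vision APIs either on-device (fast and offline-capable) or in the cloud (more accurate) boils down to a fallback-dispatch pattern. Here’s a minimal Python sketch of that pattern; the recognizer functions are hypothetical stand-ins for illustration, not the actual Firebase SDK.

```python
# Sketch of ML Kit's on-device/cloud split as a fallback-dispatch pattern.
# Both recognizers below are hypothetical stand-ins, not the Firebase SDK.

from dataclasses import dataclass


@dataclass
class Result:
    text: str
    source: str  # "device" or "cloud"


def on_device_recognize(image_bytes: bytes) -> Result:
    # Runs locally: lower latency, works offline, smaller model.
    return Result(text="hello", source="device")


def cloud_recognize(image_bytes: bytes) -> Result:
    # More accurate model, but requires a network connection.
    # Simulated here as being offline.
    raise ConnectionError("no network")


def recognize_text(image_bytes: bytes, prefer_cloud: bool = True) -> Result:
    """Try the cloud model when preferred, falling back to on-device."""
    if prefer_cloud:
        try:
            return cloud_recognize(image_bytes)
        except ConnectionError:
            pass  # offline: fall through to the on-device model
    return on_device_recognize(image_bytes)


result = recognize_text(b"...image bytes...")
print(result.source)  # prints "device" here, since the cloud call fails
```

In the real SDK the choice is made by instantiating an on-device or cloud detector, but the trade-off the pattern captures is the same: guaranteed offline availability versus higher accuracy when connectivity allows.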

Compared with its focus on AI-enabled smartphones at Google I/O, it’s surprising how little attention the company devoted to deepening the ML-driven intelligence of its search, email, collaboration and office productivity offerings. That stands in contrast to the comprehensive range of AI enhancements Microsoft announced this week across its vast application portfolio.

Also, it was unclear why Google chose to roll out yet another mobile AI development tool when it already has the established TensorFlow Lite SDK, which it recently enhanced; a browser-based AI tool, TensorFlow.js; and Swift for TensorFlow, an AI development framework suited to building iOS apps, which Google recently open-sourced. All of these support low-code development of AI for mobile platforms.

Check out this recent SiliconANGLE column of mine for more information on Google’s directions in browser-based and mobile AI.

Image: Google
