UPDATED 16:34 EDT / MAY 04 2018

EMERGING TECH

How now, smart browser? AI takes up residence on the web client

Artificial intelligence is starting to live everywhere, especially in your browser.

Browser-based AI has several advantages. Running AI in the browser can speed up some AI operations — such as sentiment analysis, hand gesture detection and style transfer — by executing them directly on the client. It can eliminate the need for background application programming interface requests to cloud-based resources, thereby simplifying and accelerating AI apps’ end-to-end flow.

It can also provide the AI app with direct access to rich data from client-side sensors, such as webcams, microphones, GPS and gyroscopes. It addresses privacy concerns by retaining browser-based AI data in the client. And not least, it brings AI within reach of the vast pool of Web developers who work in JavaScript and other client-side languages, frameworks and tools.
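
To make that concrete, here is a rough sketch of what client-side inference on webcam data can look like with TensorFlow.js, the Google framework discussed later in this article. The model URL and the preprocessing steps are illustrative assumptions rather than a specific published demo, and the API names follow recent TensorFlow.js releases.

```javascript
// A minimal sketch of in-browser inference on webcam frames with TensorFlow.js.
// The model URL is a placeholder for any image-classification model exported in
// the TensorFlow.js layers format; preprocessing details vary by model.
import * as tf from '@tensorflow/tfjs';

async function classifyWebcamFrame(modelUrl) {
  // Fetch a pretrained model over HTTP; from then on, everything runs locally.
  const model = await tf.loadLayersModel(modelUrl);

  // Ask the browser for webcam access; the pixel data never leaves the client.
  const video = document.createElement('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Capture one frame, resize and normalize it, then run inference on the client.
  const scores = tf.tidy(() => {
    const frame = tf.browser.fromPixels(video);                // H x W x 3 pixels
    const input = tf.image.resizeBilinear(frame, [224, 224])   // model input size
      .toFloat()
      .div(255)                                                // scale to [0, 1]
      .expandDims(0);                                          // add batch dimension
    return model.predict(input);
  });

  scores.print();    // class scores, computed via WebGL on the local GPU if present
  scores.dispose();
}
```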

For all those reasons, browser-focused tools for developing AI apps are beginning to proliferate. One of the latest to hit the market is TensorFire, an open-source tool developed by a team of MIT researchers. It joins the growing roster of JavaScript AI development frameworks that I discussed in this recent blog. Its developers have provided several demos of the technology, one of them a “Rock Paper Scissors” game played with a computer.

What they have in common is support for AI programming in various browser-side languages and scripts. They all support interactive modeling, training, execution and visualization of machine learning, deep learning and other AI models in the browser. They can all tap into locally installed graphics processing units and other AI-optimized hardware to speed model execution. And many of them provide built-in and pretrained neural-net models to speed development of regression, classification, image recognition and other AI-powered tasks in the browser.
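
As a rough illustration of that GPU tap-in, the fragment below shows how a TensorFlow.js app can request the library’s WebGL backend and fall back to the plain CPU backend when no GPU path is available; other browser AI libraries expose similar switches, though the exact calls differ.

```javascript
// A rough sketch of backend selection in TensorFlow.js: prefer the WebGL (GPU)
// backend and fall back to the plain CPU backend if WebGL is unavailable.
import * as tf from '@tensorflow/tfjs';

async function selectBackend() {
  // setBackend resolves to false when the requested backend cannot be initialized.
  const hasWebGL = await tf.setBackend('webgl');
  if (!hasWebGL) {
    await tf.setBackend('cpu');
  }
  await tf.ready();
  console.log(`Running on the '${tf.getBackend()}' backend`);

  // Any tensor math now executes on the selected backend.
  tf.tidy(() => {
    const x = tf.tensor2d([[1, 2], [3, 4]]);
    x.matMul(x).print();    // prints [[7, 10], [15, 22]]
  });
}
```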

Among leading AI vendors, Google has the most comprehensive tooling for helping developers build ML and DL apps not just for the browser but also for a growing range of client apps and devices. In that regard, Google has made several important recent announcements:

  • New browser-based AI framework: Google announced TensorFlow.js at its developer conference in late March. TensorFlow.js is an evolution of deeplearn.js, a JavaScript library that Google released last year, and it builds on Google’s TensorFlow Playground, an interactive visualization of neural networks written in TypeScript. The new framework supports interactive JavaScript development of client-side AI applications in which models are built and trained entirely or mostly in the browser, with their data remaining there as well. It also allows pretrained AI models to be imported, or tweaked through transfer learning, purely for browser-based inferencing. Developers can import models previously trained offline in Python, as Keras models or TensorFlow SavedModels, and then use them for inferencing or transfer learning in the browser, leveraging WebGL for client-side GPU acceleration (a minimal sketch of both workflows appears after this list). The TensorFlow.js team is planning to update the framework to support Node.js on the back end.
  • New mobile device-embedded AI framework: Google formally announced Swift for TensorFlow in March and just this past week made the open-source ML development framework available on GitHub. The framework supports programming of ML models in a general-purpose, compiled language for iOS, macOS, watchOS, tvOS and Linux. It automatically analyzes Swift code and builds the corresponding TensorFlow graph and runtime calls. After the TensorFlow graph is formed, the tool serializes it to an executable encoding that is easy to load at program runtime. As discussed in this design overview, Swift for TensorFlow lets programmers call Python APIs directly from Swift, so ML developers can keep using their existing data science tools while building TensorFlow apps in Swift. It supports immediate evaluation of AI operations without a separate graph-building step, the imperative-programming approach that TensorFlow refers to as “eager execution.” And it provides an API wrapper for the language, as well as compiler, interpreter, scripting and language enhancements, to boost developer productivity when building AI for embedding in mobile and edge applications.
  • Updates to its mobile computer-vision AI library: The company introduced MobileNetV2, the latest generation of a family of general-purpose, DL-powered computer-vision neural networks designed for mobile devices. This latest release includes enhancements to its visual recognition algorithms for classification, object detection and semantic segmentation. Google has published benchmarks showing that the V2 models are faster, more efficient and more accurate than their predecessors on these tasks. The new version is available as a component of the TensorFlow-Slim Image Classification Library, within Google’s Colaboratory tool, as a downloadable Jupyter notebook, as modules on TF-Hub and even as pretrained models on GitHub.
  • Updates to its general-purpose, device-embedded AI framework: Google recently updated its established TensorFlow Lite, a lightweight framework with a fast core interpreter for deploying trained ML models on mobile and other edge devices, including the Raspberry Pi.
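
Here is the sketch promised in the TensorFlow.js item above: a minimal, hedged illustration of the two workflows it describes, training a small model entirely in the browser and importing a Keras-trained model for in-browser transfer learning. The model URL is a placeholder and the layer sizes are arbitrary.

```javascript
// A hedged sketch of the two TensorFlow.js workflows described above: (1) define
// and train a small model entirely in the browser, and (2) import a model trained
// offline in Keras and fine-tune a new classifier head on the client (transfer
// learning). The model URL is a placeholder, and the layer sizes are arbitrary.
import * as tf from '@tensorflow/tfjs';

// (1) Train in the browser: a tiny regression model fit on in-memory data.
async function trainInBrowser() {
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

  const xs = tf.tensor2d([[1], [2], [3], [4]]);
  const ys = tf.tensor2d([[2], [4], [6], [8]]);        // learn y = 2x
  await model.fit(xs, ys, { epochs: 50 });             // uses WebGL when available
  model.predict(tf.tensor2d([[5]])).print();           // roughly 10
}

// (2) Transfer learning: load a Keras model converted with tensorflowjs_converter,
// freeze its feature-extraction layers and train only a small new classifier head.
async function buildTransferModel() {
  const base = await tf.loadLayersModel('https://example.com/converted/model.json');
  base.layers.forEach(layer => { layer.trainable = false; });   // freeze the base

  const head = tf.sequential({
    layers: [
      tf.layers.flatten({ inputShape: base.outputs[0].shape.slice(1) }),
      tf.layers.dense({ units: 3, activation: 'softmax' })      // e.g. 3 new classes
    ]
  });
  head.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });

  // head.fit(...) would then be trained on embeddings produced by base.predict(...).
  return { base, head };
}
```

Freezing the imported layers keeps the expensive feature extractor fixed, so only the small classifier head needs to be trained on the client, which is what makes transfer learning practical on browser hardware.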

Google’s TensorFlow.js developer ecosystem is starting to pick up steam. Check out this online “playground,” which invites developers to “tinker with a neural network right here in your browser.” Here’s a third-party data-science developer site that walks you through developing JavaScript-based TensorFlow.js apps. And here’s a third-party post that provides converters and utilities for TensorFlow.js developers.

Finally, here’s an excellent discussion of TensorFlow.js from TensorFlow Dev Summit 2018, focusing on the core libraries and the high-level APIs that make it easier to develop AI in JavaScript.

