UPDATED 16:31 EST / MAY 15 2018

EMERGING TECH

How client-side training is moving from the fringes to the center of AI development

Model training is where artificial intelligence models are readied for production deployment.

Traditionally, machine learning, deep learning and other AI models are trained in clouds, server clusters and other high-performance computing environments. However, Wikibon has recently noticed a surge in AI training environments that operate at the network’s edge. In other words, these environments evaluate AI models’ fitness through hardware and software resources resident in mobile devices, smart sensors, Web browsers and other client-side platforms.

More often than not, client-side training can’t produce AI models that are as accurate in their inferential tasks, such as prediction, classification and the like, as those trained in centralized data environments. But client-side training often has a countervailing advantage: it continually updates the AI model in each edge node in keeping with the specific data being sensed by that node, optimizing the model for the specific tasks executed at that location.

In that sense, client-side training can be an accelerator of AI-model learning within distributed edge clouds. As more IoT, mobile and other distributed application environments adopt client-side AI training, I see the following practices coming into the mainstream of AI DevOps pipelines:

  • On-device training: Client-side training enables apps to ingest freshly sensed local data and rapidly update the specific AI models persisted in those devices. As this article notes, device-side AI training is already standard in many iOS applications, such as ensuring that Face ID recognizes you consistently, grouping people’s pictures accurately in the Photos app, tuning the iPhone’s predictive keyboard and helping Apple Watch learn your habitual patterns automatically from activity data.
  • Transfer learning for incremental client-side training: A key accelerator for client-side AI training is transfer learning. This involves reusing any relevant training data, feature representations, neural-node architectures, hyperparameters and other properties of existing models, such as those executed on peer nodes. This would appear to be how Neurala implements client-side AI training, combining fast pretraining with incremental on-the-fly learning, so that local models can be incrementally updated in real time “without … the need to keep all training data to add new knowledge.”
  • Browser-based training within JavaScript app development frameworks: Many low-code development environments build apps in JavaScript frameworks, and client-side AI modeling and training are coming to this environment as well. As I recently discussed here, browser-based AI frameworks such as TensorFlow.js and TensorFire are starting to gain traction. These enable AI modeling, training and deployment to take place right in the browser, leveraging JavaScript and other scripting and procedural languages. Many of these frameworks provide built-in and pretrained neural-net models to speed development of regression, classification, image recognition and other AI-powered tasks in the browser. TensorFlow.js also allows developers to import models previously trained offline in Python with Keras or as TensorFlow SavedModels, and then use them for inferencing or transfer learning in the browser.
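To make the on-device pattern concrete, here is a minimal, purely illustrative sketch (not drawn from any vendor SDK) of an edge node that nudges a tiny linear model with every freshly sensed reading, so the model adapts to local conditions without shipping raw data upstream:

```python
# Hypothetical sketch: an edge device keeps a tiny one-feature linear model
# (y ~ w*x + b) and refines it with each locally sensed reading via online SGD.

class OnDeviceModel:
    """One-feature linear model updated one observation at a time."""

    def __init__(self, lr=0.05):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        """Single SGD step on squared error for one locally sensed (x, y) pair."""
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

model = OnDeviceModel()
# Simulated local sensor stream whose readings follow y = 2x + 1
for step in range(2000):
    x = (step % 10) / 10.0
    model.update(x, 2.0 * x + 1.0)
# model.w and model.b now track the local relationship (roughly 2.0 and 1.0)
```

Each update touches only the data sensed at that node, which is the essence of the device-side personalization described above.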
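The transfer-learning pattern can be sketched in the same spirit. In this hypothetical example (illustrative only, not Neurala’s actual implementation), a centrally pretrained feature extractor stays frozen while the device fits only a small output head on a handful of local samples:

```python
# Hypothetical transfer-learning sketch: reuse a frozen, centrally pretrained
# feature extractor and train only a small output layer on local data.

def frozen_features(x):
    """Stand-in for a pretrained feature extractor; its weights never change."""
    return [x, x * x, 1.0]  # fixed nonlinear feature map plus a bias term

def train_head(samples, lr=0.1, epochs=500):
    """Fit only the head weights by SGD; the extractor stays frozen."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            feats = frozen_features(x)
            err = sum(wi * fi for wi, fi in zip(w, feats)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# A handful of local samples following y = x^2 + 0.5 -- far fewer than
# full from-scratch training would need
local = [(x / 4.0, (x / 4.0) ** 2 + 0.5) for x in range(5)]
head = train_head(local)

def predict(x):
    return sum(wi * fi for wi, fi in zip(head, frozen_features(x)))
```

Because the expensive representation learning happened centrally, the edge node only solves a small, fast optimization problem, which is what makes incremental in-the-field updates tractable.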

Developers should implement client-side AI training within a broader focus on automating the machine learning pipeline end to end. When considered in that context, client-side approaches can play an important role in many established training workflows, such as:

  • Semisupervised learning: This is an established approach for using small amounts of labeled data — perhaps crowdsourced from human users in mobile apps — to accelerate pattern identification in large, unlabeled data sets, such as those ingested through IoT devices’ cameras, microphones and environmental sensors.
  • Synthetic training data: This involves generating artificial training data as well as the labels and annotations needed for supervised learning, perhaps by crowdsourcing CPU cycles, memory and storage from client devices.
  • Reinforcement learning: This involves building AI modules — such as those deployed in industrial robots — that can learn autonomously with little or no “ground truth” training data, though possibly with human guidance.
  • Collaborative learning: This involves having distributed AI modules — perhaps deployed in swarming drones — that collectively explore, exchange and exploit optimal hyperparameters, thereby enabling all modules to converge dynamically on the optimal tradeoff of learning speed versus accuracy.
  • Evolutionary learning: This involves training a group of AI-driven entities — perhaps mobile and IoT endpoints — through a procedure that learns from the aggregate of self-interested decisions they make, based both on entity-level knowledge and on varying degrees of cross-entity model-parameter sharing.
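To illustrate the semisupervised pattern above, here is a toy self-training sketch with made-up sensor values: a few labeled readings seed a nearest-centroid classifier, which then pseudo-labels its most confident unlabeled readings and refits.

```python
# Hypothetical semisupervised self-training sketch: a few labeled readings
# (e.g. crowdsourced from app users) bootstrap labels for a larger unlabeled
# pool of sensor data.

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, rounds=3):
    """labeled: dict label -> list of 1-D readings; unlabeled: list of readings."""
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {lab: centroid(pts) for lab, pts in labeled.items()}
        # Pseudo-label the readings nearest to some centroid (most confident first)
        pool.sort(key=lambda x: min(abs(x - c) for c in cents.values()))
        confident, pool = pool[: len(pool) // 2], pool[len(pool) // 2 :]
        for x in confident:
            lab = min(cents, key=lambda l: abs(x - cents[l]))
            labeled[lab].append(x)
    return {lab: centroid(pts) for lab, pts in labeled.items()}

seed = {"quiet": [0.1], "loud": [0.9]}        # tiny labeled set
stream = [0.05, 0.15, 0.2, 0.8, 0.85, 0.95]  # unlabeled sensor readings
cents = self_train(seed, stream)
```

The toy example keeps everything one-dimensional for clarity; the point is the workflow, in which a small labeled seed progressively absorbs confident pseudo-labels from the unlabeled stream.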

Though client-side training can save time in some AI DevOps scenarios, it would be a gross oversimplification to claim that this approach can greatly reduce the elapsed training time on any job. Accelerating a particular AI DevOps workflow may require centralization, decentralization or some hybrid approach to preparation, modeling, training and so on. For example, most client-side training depends on the availability of pretrained — and centrally produced — models as the foundation of in-the-field adaptive tweaks.


