How client-side training is moving from the fringes to the center of AI development
Model training is where artificial intelligence models are readied for production deployment.
Traditionally, machine learning, deep learning and other AI models are trained in clouds, server clusters and other high-performance computing environments. However, Wikibon has recently noticed a surge in AI training environments that operate at the network’s edge. In other words, these environments evaluate AI models’ fitness through hardware and software resources resident in mobile devices, smart sensors, Web browsers and other client-side platforms.
More often than not, client-side training can’t produce AI models that are as accurate in their inferential tasks, such as prediction, classification and the like, as those trained in centralized data environments. But client-side training often has a countervailing advantage: it can continually update the AI model in each edge node to reflect the specific data sensed by that node and to optimize the specific tasks executed at that location.
In that sense, client-side training can be an accelerator of AI-model learning within distributed edge clouds. As more IoT, mobile and other distributed application environments adopt client-side AI training, I see the following practices coming into the mainstream of AI DevOps pipelines:
- On-device training: Client-side training enables apps to ingest freshly sensed local data and rapidly update the specific AI models persisted in those devices. As this article notes, device-side AI training is already standard in many iOS applications, such as ensuring that Face ID recognizes you consistently, grouping people’s pictures accurately in the Photos app, tuning the iPhone’s predictive keyboard and helping Apple Watch learn your habitual patterns automatically from activity data.
- Transfer learning for incremental client-side training: A key accelerator for client-side AI training is transfer learning. This involves reusing any relevant training data, feature representations, neural-node architectures, hyperparameters and other properties of existing models, such as those executed on peer nodes. This would appear to be how Neurala implements client-side AI training, combining fast pretraining with incremental on-the-fly learning, so that local models can be incrementally updated in real time “without … the need to keep all training data to add new knowledge.”
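The pattern described above can be sketched in a few lines. The following is a minimal, illustrative NumPy example, not any vendor’s actual implementation: a hypothetical “pretrained” feature extractor stands in for layers trained centrally and shipped to the device, and only a small trainable head is updated incrementally as each freshly sensed example arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pretrained" feature extractor: a fixed random
# projection stands in for layers trained centrally and shipped to devices.
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen layers: never updated on the client.
    return np.tanh(x @ W_pretrained)

# Small trainable head, updated incrementally on-device.
w_head = np.zeros(8)
b_head = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_on_device(x, y, lr=0.5):
    """One incremental SGD step on a single freshly sensed example."""
    global w_head, b_head
    h = extract_features(x)
    p = sigmoid(h @ w_head + b_head)
    grad = p - y                      # gradient of the log loss w.r.t. the logit
    w_head -= lr * grad * h
    b_head -= lr * grad

# Simulate a stream of locally sensed, labeled examples.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
for xi, yi in zip(X, y):
    update_on_device(xi, yi)

preds = (sigmoid(extract_features(X) @ w_head + b_head) > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"local accuracy after incremental updates: {accuracy:.2f}")
```

Because the heavy feature-extraction layers stay frozen, each update touches only a handful of parameters, which is what makes this style of incremental learning cheap enough to run on phones, sensors and browsers.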
Developers should implement client-based AI training within a broader focus on automating the machine learning pipeline end-to-end. When considered in that context, client-based approaches can play an important role in many established training workflows, such as:
- Semisupervised learning: This is an established approach for using small amounts of labeled data — perhaps crowdsourced from human users in mobile apps — to accelerate pattern identification in large, unlabeled data sets, such as those ingested through IoT devices’ cameras, microphones and environmental sensors.
- Synthetic training data: This involves generating artificial training data as well as the labels and annotations needed for supervised learning, perhaps by crowdsourcing CPU cycles, memory and storage from client devices.
- Reinforcement learning: This involves building AI modules — such as those deployed in industrial robots — that can learn autonomously with little or no “ground truth” training data, though possibly with human guidance.
- Collaborative learning: This involves having distributed AI modules — perhaps deployed in swarming drones — that collectively explore, exchange and exploit optimal hyperparameters, thereby enabling all modules to converge dynamically on the optimal tradeoff of learning speed versus accuracy.
- Evolutionary learning: This involves training a group of AI-driven entities — perhaps mobile and IoT endpoints — through a procedure that learns from the aggregate of the self-interested decisions they make, based both on entity-level knowledge and on varying degrees of cross-entity model-parameter sharing.
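To make the first of these workflows concrete, here is a toy self-training sketch of semisupervised learning, with invented data and a deliberately simple nearest-centroid classifier: a handful of labeled points (the kind that might be crowdsourced from users) bootstraps pseudo-labels for a much larger unlabeled pool.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two 2-D Gaussian clusters. Only ten points are labeled
# (e.g., crowdsourced from users); the rest are unlabeled sensor readings.
n = 300
X = np.vstack([rng.normal(-2, 1, size=(n, 2)), rng.normal(2, 1, size=(n, 2))])
y_true = np.array([0] * n + [1] * n)
labeled = np.r_[rng.choice(n, 5, replace=False),
                n + rng.choice(n, 5, replace=False)]

def nearest_centroid_fit(X, y):
    # One centroid per class; classification is by nearest centroid.
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Round 1: fit on the few labeled points only.
centroids = nearest_centroid_fit(X[labeled], y_true[labeled])

# Self-training: pseudo-label the unlabeled pool, then refit on everything.
pseudo = predict(centroids, X)
pseudo[labeled] = y_true[labeled]          # keep the true labels we have
centroids = nearest_centroid_fit(X, pseudo)

accuracy = (predict(centroids, X) == y_true).mean()
print(f"accuracy with 10 labels + pseudo-labels: {accuracy:.2f}")
```

The same bootstrap-then-refit loop is what lets a small amount of human labeling accelerate pattern identification across the large unlabeled streams that IoT cameras, microphones and environmental sensors produce.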
Though client-side training can save time in some AI DevOps scenarios, it would be a gross oversimplification to claim that this approach can greatly reduce the elapsed training time on any job. Accelerating a particular AI DevOps workflow may require centralization, decentralization or some hybrid approach to preparation, modeling, training and so on. For example, most client-side training depends on the availability of pretrained — and centrally produced — models as the foundation of in-the-field adaptive tweaks.