Google Inc. is shaking up the open-source ecosystem. In November, the search giant released the code for an internally developed machine learning engine, a move that prompted nearly half a dozen other web giants to share their own model-building tools. And now it’s taking the fight deeper into the development life cycle with the launch of a complementary framework for automating algorithm deployment.
TensorFlow Serving provides the ability to encapsulate the code of a machine learning model in a high-level abstraction that Google refers to as a “Servable”. Each such construct carries a version identifier that the framework uses to determine how to handle its payload. A developer can choose to have an instance automatically replaced when a newer iteration becomes available, or first run the two side by side for a while to compare their performance. If no issues are found, the rollout can then be allowed to proceed.
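The versioning behavior described above is driven by configuration. As a rough illustration, a model server configuration along the following lines can pin specific versions so two iterations stay loaded side by side during a comparison (the model name and path here are hypothetical, and the exact fields may differ by release):

```
model_config_list {
  config {
    # Illustrative names; substitute your own model and path.
    name: "recommender"
    base_path: "/models/recommender"
    model_platform: "tensorflow"
    # Serve versions 1 and 2 simultaneously instead of
    # automatically replacing the old one with the newest.
    model_version_policy {
      specific {
        versions: 1
        versions: 2
      }
    }
  }
}
```

Dropping the `model_version_policy` block would restore the default behavior, in which the server simply serves the latest available version.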
The functionality could go a long way toward simplifying the management of large-scale machine learning projects that include a significant number of components. TensorFlow Serving makes it possible to package each major part into its own self-contained Servable that can be updated separately from the rest of the workload, which is much easier than patching the entire code base at once. As a result, organizations can push out new features and enhancements for users considerably faster than they would otherwise be able to. It’s the same value proposition that has made Docker so popular over in the application development world, down to the fact that both frameworks place a heavy emphasis on performance.
Google says that TensorFlow Serving was able to handle about 100,000 queries per second during an internal benchmark test carried out on a 16-core virtual machine in its public cloud. Much of the system’s speed can be credited to the fact that it’s written in C++, which allows for more efficient use of hardware resources than higher-level languages like the search giant’s own Go. On the flip side, however, the syntax is also more complicated, a fact that will make it harder for organizations to customize the source code to their requirements.
That’s a fairly big factor considering that TensorFlow Serving only supports Google’s own machine learning engine at launch. Third-party integration will be a key requirement for enterprises that have already standardized their model-building efforts on a competing alternative.
Image via blickpixel