Google ups the ante in the machine learning wars

Google Inc. is shaking up the open-source ecosystem. In November, the search giant released the code for TensorFlow, an internally developed machine learning engine, a move that prompted nearly half a dozen other web giants to share their own model-building tools. Now it’s taking the fight deeper into the development life cycle with the launch of a complementary framework for automating algorithm deployment.

TensorFlow Serving encapsulates the code of a machine learning model in a high-level abstraction that Google refers to as a “servable.” Each such construct carries a version identifier that the framework uses to determine how to handle its payload. A developer can choose to have an instance automatically replaced as soon as a newer iteration becomes available, or to first run the two side by side for a while to compare their performance. If no issues are found, the rollout can be allowed to proceed.
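
For illustration, here is a minimal Python sketch of the versioned-servable idea described above. It is purely conceptual: TensorFlow Serving itself is written in C++, and the class and method names below are hypothetical stand-ins, not part of the framework’s actual API.

```python
# Hypothetical sketch of the versioned-servable concept; not TensorFlow
# Serving's real API, which is implemented in C++.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A stand-in for a trained model: maps a feature vector to a score.
Model = Callable[[List[float]], float]

@dataclass
class ServableRegistry:
    """Tracks versioned models and supports staged rollouts."""
    versions: Dict[int, Model] = field(default_factory=dict)
    live_version: int = 0

    def add_version(self, version: int, model: Model) -> None:
        self.versions[version] = model

    def promote(self, version: int) -> None:
        """Atomically replace the live model with a newer iteration."""
        if version not in self.versions:
            raise KeyError(f"unknown version {version}")
        self.live_version = version

    def shadow_compare(
        self, candidate: int, inputs: List[List[float]]
    ) -> List[Tuple[float, float]]:
        """Run the live and candidate versions side by side on the same
        traffic so their outputs can be compared before a full rollout."""
        live = self.versions[self.live_version]
        new = self.versions[candidate]
        return [(live(x), new(x)) for x in inputs]

# Usage: register a model, promote it, then trial a newer iteration
# against it before cutting over.
registry = ServableRegistry()
registry.add_version(1, lambda x: sum(x))          # placeholder "model"
registry.promote(1)
registry.add_version(2, lambda x: sum(x) * 1.01)   # "improved" iteration
print(registry.shadow_compare(2, [[1.0, 2.0], [3.0, 4.0]]))
registry.promote(2)  # no issues found, so let the rollout proceed
```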

The functionality could go a long way toward simplifying the management of large-scale machine learning projects that involve a significant amount of componentry. TensorFlow Serving makes it possible to package each major part into its own self-contained servable that can be updated separately from the rest of the workload, which is much easier than patching the entire code base at once. As a result, organizations can push out new features and enhancements to users considerably faster than they otherwise could. It’s the same value proposition that has made Docker so popular in the application development world, down to the heavy emphasis both frameworks place on performance.
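
As a concrete illustration of that modularity, each servable in a deployed system can be addressed independently by model name and version, so one component can be queried or upgraded without touching the rest. The snippet below queries a single model version over TensorFlow Serving’s REST interface; note that the REST API was added to the project after the launch covered here, and the host, port, model name and input values are assumptions for the example.

```python
import json
import urllib.request

# Query version 2 of one model in a multi-model deployment, leaving the
# other servables in the workload untouched. Port 8501 is TensorFlow
# Serving's default REST port; "my_model" and the input vector are
# placeholders for this sketch.
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]}).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:8501/v1/models/my_model/versions/2:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response)["predictions"])
```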

Google says that TensorFlow Serving was able to handle about 100,000 queries per second in an internal benchmark carried out on a 16-core virtual machine in its public cloud. Much of the system’s speed can be credited to the fact that it’s written in C++, which allows for more efficient use of hardware resources than higher-level languages such as the search giant’s own Go. On the flip side, the syntax is also more complicated, which will make it harder for organizations to customize the source code to their requirements.

That’s a fairly big factor considering that TensorFlow Serving supports only Google’s own machine learning engine at launch. Third-party integration will be a key requirement for enterprises that have already standardized their model-building efforts on a competing alternative.

Image via blickpixel
Maria Deutscher

Maria Deutscher is a staff writer for SiliconANGLE covering all things enterprise and fresh. Her work takes her from the bowels of the corporate network up to the great free ranges of the open-source ecosystem and back on a daily basis, with the occasional pit stop in the world of end-users. She is especially passionate about cloud computing and data analytics, although she also has a soft spot for stories that diverge from the beaten track to provide a more unique perspective on the complexities of the industry.
