

Though Amazon.com Inc. doesn’t make a habit of sharing its internally developed software with the outside world, the growing number of fellow web giants that have open-sourced their deep learning technology in recent months has apparently prompted a change of heart. And so the company quietly joined the fray yesterday with the release of a C++ library for developing neural networks that could make the task significantly faster for data scientists.
Dubbed DSSTNE, the framework owes its speed in large part to the parallelization mechanism Amazon built under the hood to handle distributed processing. Most alternatives execute deep learning models by running a separate copy of the code on each available GPU and synchronizing the activity through some orchestration mechanism. Others assign each major element of the algorithm to a different chip, which is slightly more efficient but still doesn’t make the most of the available hardware. DSSTNE, in turn, implements an improved variation of the latter method: it figures out the optimal amount of computation a given processor can handle and distributes the load accordingly.
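The second approach the paragraph describes, spreading the pieces of a single model across several chips rather than copying the whole model to each one, can be sketched in a few lines. This is an illustrative toy, not DSSTNE’s actual CUDA/C++ implementation: each simulated “device” owns a column slice of a layer’s weight matrix and computes only its share of the output units.

```python
import numpy as np

def model_parallel_forward(x, weight, n_devices):
    """Toy model-parallel forward pass: output units are split across devices."""
    # Split the columns of the weight matrix (the output units) across devices.
    shards = np.array_split(weight, n_devices, axis=1)
    # Each device multiplies the full input batch by its own weight shard...
    partial_outputs = [x @ shard for shard in shards]
    # ...and the partial results are concatenated into the full layer output.
    return np.concatenate(partial_outputs, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))    # batch of 8 inputs with 32 features
w = rng.normal(size=(32, 64))   # one layer with 64 output units

# Splitting the work across 4 "devices" yields the same result as one big matmul.
assert np.allclose(model_parallel_forward(x, w, 4), x @ w)
```

The point of the sketch is that no device ever needs the full weight matrix in memory, which is what lets a framework balance load according to what each processor can handle.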
The resulting performance improvement becomes especially pronounced when the framework is used to analyze so-called sparse datasets, in which most values are missing. Amazon optimized DSSTNE for handling partial information to help speed up the creation of recommendation engines, which typically don’t have access to all the information they’re programmed to weigh. The product suggestion feature on the retail giant’s website, for instance, can’t take a visitor’s buying history into account if they’re not logged into their account. Search applications will likewise be able to exploit the framework to deal with the semantic gaps that often appear in user queries.
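Why sparsity helps is easy to see with a small sketch. The names below are hypothetical and the code is not DSSTNE’s API; it only illustrates the general trick: if a user’s input is a huge, mostly zero vector of product interactions, the first layer’s matrix product reduces to summing the handful of weight rows for the items the user actually touched.

```python
import numpy as np

def sparse_forward(active_indices, weight):
    """Forward pass for a one-hot-style sparse input.

    Equivalent to (dense binary vector) @ weight, but only touches the
    weight rows for the items that are actually present.
    """
    return weight[active_indices].sum(axis=0)

rng = np.random.default_rng(1)
n_items, hidden = 10_000, 16
w = rng.normal(size=(n_items, hidden))  # first-layer weights: one row per item

user_history = [3, 1027, 9999]          # the few items one user interacted with

# Dense equivalent: a 10,000-element vector with three ones in it.
dense = np.zeros(n_items)
dense[user_history] = 1.0

assert np.allclose(sparse_forward(user_history, w), dense @ w)
```

The sparse version does work proportional to the three items in the history rather than the 10,000 items in the catalog, which is where the speedup on recommendation-style data comes from.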
Amazon also plans to add support for image and speech recognition algorithms down the road in an effort to broaden DSSTNE’s appeal even further. The library already poses a threat to existing deep learning engines: The retail giant claims it ran 2.1 times faster than Alphabet Inc.’s popular TensorFlow system in an internal benchmark test.