UPDATED 16:17 EST / SEPTEMBER 16 2020

Google details RigL algorithm for building more efficient neural networks

Google LLC today detailed RigL, an algorithm developed by its researchers that makes artificial intelligence models more hardware-efficient by shrinking them. 

Neural networks are made up of so-called artificial neurons, individual mathematical operations implemented in code that are linked together by software connections. These connections are what enable the artificial neurons to pass data to each other for processing. RigL makes AI software more efficient by fixing a common optimization issue in machine learning models: They often have more connections between neurons than they strictly need.

The connections in an AI model effectively serve as data pathways, and the data the model processes usually passes through only a subset of those pathways. The rest sit unused, needlessly taking up processor and memory resources. According to Google, RigL removes such redundant connections by making strategic tweaks to a neural network’s structure during the training phase of development.
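
The idea of connections as prunable data pathways can be illustrated with a short Python sketch. The layer size, the 80% sparsity target and the simple magnitude-based pruning rule below are assumptions for demonstration, not RigL’s exact procedure:

    import numpy as np

    # Illustrative sketch only: remove the weakest connections in one layer.
    # The layer shape, the 80% sparsity target and the magnitude-based rule
    # are assumptions for demonstration, not RigL's exact procedure.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(256, 128))   # dense layer: 256 inputs feeding 128 neurons
    sparsity = 0.8                          # fraction of connections to remove

    # Keep only the largest-magnitude connections and zero out the rest.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    sparse_weights = weights * mask

    kept = int(mask.sum())
    print(f"connections kept: {kept:,} of {weights.size:,} ({kept / weights.size:.0%})")

Every zeroed entry is a connection the network no longer has to compute or store, which is where the processor and memory savings come from.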

Google researchers put RigL to the test in an experiment involving an image processing model. It was given the task of analyzing images containing different characters.

During the model training phase, RigL determined that the AI only needs to analyze the character in the foreground of each image and can skip processing the background pixels, which don’t contain any useful information. The algorithm then removed connections used for processing background pixels and added new, more efficient ones in their place.

“The algorithm identifies which neurons should be active during training, which helps the optimization process to utilize the most relevant connections and results in better sparse solutions,” Google research engineers Utku Evci and Pablo Samuel Castro explained in a blog post. “At regularly spaced intervals we remove a fraction of the connections.”
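
In the underlying RigL research paper, the connections to drop are chosen by smallest weight magnitude and replacements are grown where gradient magnitudes are largest. The single-layer sketch below illustrates one such periodic update; the function name, the drop fraction and the NumPy details are simplifying assumptions, not Google’s implementation:

    import numpy as np

    def rigl_update(weights, grads, mask, drop_fraction=0.3):
        """One RigL-style connectivity update for a single layer (rough sketch).

        Drops the active connections with the smallest weight magnitudes and
        regrows the same number of inactive connections where the gradient
        magnitudes are largest, keeping the total number of connections fixed.
        The drop_fraction value and other details are simplifying assumptions.
        """
        flat_w, flat_g = weights.ravel().copy(), grads.ravel()
        flat_m = mask.ravel().copy()
        n_drop = int(drop_fraction * flat_m.sum())

        # Drop: among active connections, remove those with the smallest magnitude.
        drop_scores = np.where(flat_m, np.abs(flat_w), np.inf)
        flat_m[np.argsort(drop_scores)[:n_drop]] = False

        # Grow: among previously inactive connections, enable those whose
        # gradients are largest; new connections start from a weight of zero.
        grow_scores = np.where(mask.ravel(), -np.inf, np.abs(flat_g))
        grow_idx = np.argsort(grow_scores)[::-1][:n_drop]
        flat_m[grow_idx] = True

        flat_w = np.where(flat_m, flat_w, 0.0)
        flat_w[grow_idx] = 0.0
        return flat_w.reshape(weights.shape), flat_m.reshape(mask.shape)

Applied at regular intervals during training, an update like this keeps the number of active connections, and with it the compute cost, roughly constant while letting the network rewire itself toward its most useful pathways.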

There are other methods besides RigL that attempt to compress neural networks by removing redundant connections. However, those methods have the downside of significantly reducing the compressed model’s accuracy, which limits their practical application. Google says RigL achieves higher accuracy than three of the most sophisticated alternative techniques while also “consistently requiring fewer FLOPs (and memory footprint) than the other methods.”

In one test, Google researchers used RigL to delete 80% of the connections in the popular ResNet-50 model. The resulting neural network achieved accuracy comparable to that of the original. In another experiment, the researchers removed 99% of ResNet-50’s connections and still saw a top accuracy of 70.55%.
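
For a sense of scale, ResNet-50 is commonly cited as having roughly 25.6 million weights, a figure not given in the article. Under that assumption, the two sparsity levels leave the following numbers of connections:

    # Back-of-the-envelope parameter counts. The roughly 25.6 million weight
    # figure for ResNet-50 is a commonly cited approximation, not from the article.
    resnet50_weights = 25_600_000

    for removed in (0.80, 0.99):
        remaining = round(resnet50_weights * (1 - removed))
        print(f"{removed:.0%} of connections removed -> about {remaining:,} weights remain")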

“RigL is useful in three different scenarios: Improving the accuracy of sparse models intended for deployment … improving the accuracy of large sparse models that can only be trained for a limited number of iterations [and] combining with sparse primitives to enable training of extremely large sparse models which otherwise would not be possible,” Evci and Castro detailed. 
