UPDATED 15:02 EST / MAY 28 2021

APPS

Google details code optimization tech behind Chrome’s latest 23% speed boost

Google LLC has shared technical information about two new code optimization technologies included in the latest release of Chrome that promise to make the browser up to 23% faster in some cases.

The search giant detailed the two technologies, called Sparkplug and “short builtin calls,” on Thursday. Both are implemented as part of the V8 engine that Chrome uses to load web pages’ JavaScript code. Web pages use JavaScript to power key features such as buttons and menus, which means that boosting the speed at which code written in the language runs can provide a significant overall performance improvement.
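
As a simple, made-up illustration of the kind of everyday JavaScript work V8 has to execute, the snippet below wires a click handler to a button; the element IDs and message are invented for the example and don't come from Google.

```typescript
// Made-up example of the everyday JavaScript V8 executes: a click handler
// that updates the page when a button is pressed.
const button = document.querySelector<HTMLButtonElement>("#add-to-cart");
const statusEl = document.querySelector<HTMLElement>("#status");

button?.addEventListener("click", () => {
  if (statusEl) {
    statusEl.textContent = "Added to cart"; // this runs through V8 on every click
  }
});
```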

Chrome’s V8 JavaScript engine “executes over 78 years’ worth of JavaScript code on a daily basis,” Chrome product manager Thomas Nattestad wrote in a blog post. “Chrome is now up to 23% faster with the launch of a new Sparkplug compiler and short builtin calls, saving over 17 years of our users’ CPU time each day!”

Code written in programming languages such as JavaScript can’t run directly on a computer’s central processing unit, but rather needs to be turned into so-called machine code first. This is a task that Google’s engineers have sped up with Sparkplug, the first of the two newly detailed technologies. It’s a compiler that transforms a web page’s JavaScript code into machine code and, in the process, performs optimizations to help the user’s computer load the web page faster.

Chrome already has a compiler that optimizes code to boost performance. However, that existing compiler takes a while to spring into action after a user opens a web page, which means there’s a time window in which web content has been loaded but isn’t yet running as fast as it could. Sparkplug provides a speed uplift in that window so users experience faster browsing even before Chrome completes all its optimizations.
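
One way to picture this tiered setup is as a promotion scheme: code starts out in the cheap Sparkplug tier and is handed to V8’s optimizing compiler, TurboFan, once it has run enough to be worth the extra effort. The sketch below is only an illustrative model of that idea; the threshold and function names are invented and do not reflect V8’s real heuristics.

```typescript
// Illustrative model of tiered compilation, not V8's actual implementation.
type Tier = "sparkplug-baseline" | "turbofan-optimized";

interface JsFunction {
  name: string;
  tier: Tier;
  callCount: number;
}

const HOT_THRESHOLD = 1000; // invented number, not a real V8 tuning value

function onCall(fn: JsFunction): void {
  fn.callCount++;
  // While a function is still "cold", the quickly produced Sparkplug code keeps
  // the page responsive; once it runs hot, the slower optimizer takes over.
  if (fn.tier === "sparkplug-baseline" && fn.callCount >= HOT_THRESHOLD) {
    fn.tier = "turbofan-optimized";
  }
}

const render: JsFunction = { name: "render", tier: "sparkplug-baseline", callCount: 0 };
for (let i = 0; i < 1500; i++) onCall(render);
console.log(render.tier); // "turbofan-optimized"
```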

The reason Sparkplug can start boosting code performance before Chrome’s existing optimizing compiler activates is that it compiles code much more quickly than that compiler does. That speed advantage, in turn, is the result of two specific design decisions made by Google’s engineers.

First, Sparkplug takes advantage of the fact that Chrome turns all web pages’ JavaScript code into an intermediary form called bytecode to ease processing. Sparkplug performs its work on that bytecode rather than on the original JavaScript code, which is faster in part because expensive steps such as parsing the source and resolving variables have already been completed by the time the bytecode exists.
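
To give a rough sense of what that intermediary form looks like, the example below pairs a tiny function with an invented bytecode listing loosely modeled on V8’s Ignition output. Real V8 bytecode differs in its details; the point is that the compiler works on this compact, already-parsed form rather than on the original source text.

```typescript
// A tiny function...
function add(a: number, b: number): number {
  return a + b;
}

// ...and an invented bytecode listing loosely modeled on V8's Ignition output.
// The compiler can walk this flat, regular list far more cheaply than it could
// re-analyze the source code above.
const addBytecode = [
  "Ldar a1",  // load the second argument into the accumulator
  "Add a0",   // add the first argument to the accumulator
  "Return",   // return the accumulator's value
];
console.log(add(2, 3), addBytecode.join(" ; "));
```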

The second way Google sped up Sparkplug was by skipping a step that compilers normally perform. “Sparkplug doesn’t generate any intermediate representation (IR) like most compilers do. Instead, Sparkplug compiles directly to machine code in a single linear pass over the bytecode,” Google detailed in a technical blog post.

“The lack of IR means that the compiler has limited optimisation opportunity” and it also makes it more difficult to add support for different types of processors, the company explained. “But, it turns out that neither of these is a problem: a fast compiler is a simple compiler, so the code is pretty easy to port [across processors]; and Sparkplug doesn’t need to do heavy optimisation, since we have a great optimising compiler later on in the pipeline anyway.”
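
To make the “single linear pass, no intermediate representation” idea concrete, here is a deliberately tiny sketch of a baseline-style code generator: it walks a bytecode list once, emitting pseudo machine instructions immediately for each opcode, with nothing built in between. The opcodes and output are invented for illustration and are not Sparkplug’s actual code generation.

```typescript
// Toy single-pass code generator: one linear walk over the bytecode, emitting
// machine-code-like text for each opcode with no intermediate representation.
type Bytecode =
  | { op: "LoadConst"; value: number } // put a constant in the accumulator
  | { op: "PushAcc" }                  // push the accumulator onto the stack
  | { op: "Add" }                      // pop a value, add it to the accumulator
  | { op: "Return" };

function baselineCompile(bytecodes: Bytecode[]): string[] {
  const machineCode: string[] = [];
  for (const bc of bytecodes) { // exactly one pass, in order, nothing buffered
    switch (bc.op) {
      case "LoadConst":
        machineCode.push(`mov rax, ${bc.value}`);
        break;
      case "PushAcc":
        machineCode.push("push rax");
        break;
      case "Add":
        machineCode.push("pop rbx", "add rax, rbx");
        break;
      case "Return":
        machineCode.push("ret");
        break;
    }
  }
  return machineCode;
}

// "2 + 3" expressed as toy bytecode:
console.log(baselineCompile([
  { op: "LoadConst", value: 2 },
  { op: "PushAcc" },
  { op: "LoadConst", value: 3 },
  { op: "Add" },
  { op: "Return" },
]));
```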

The other new technology that contributes to the new Chrome release’s performance is called short builtin calls. The technology derives its name from builtins, which are pieces of code that run alongside the JavaScript code from a web page and perform various auxiliary tasks. Before the introduction of the feature, Chrome stored builtins and JavaScript code in randomly selected parts of a computer’s memory, which slowed down performance because of a technical detail related to how modern processors are built.

A computer’s memory is made up of segments that are often described in terms of their “distance” from one another. If the memory segment holding Chrome’s builtins is far away from the segment containing a web page’s compiled JavaScript code, which often happened because their locations were chosen randomly, the processor takes longer to run the page’s code. Google’s short builtin calls reduce the distance between the two segments to speed up computations.
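
The “distance” constraint comes from how call instructions are encoded. On x86-64, for example, a direct call stores its target as a signed 32-bit offset and can therefore only reach code within roughly two gigabytes of the call site; anything farther away needs a slower indirect call. The sketch below models that limit with made-up addresses; it illustrates the general principle rather than Chrome’s actual memory layout.

```typescript
// Illustrative model of why call "distance" matters: a direct (near) call on
// x86-64 encodes its target as a signed 32-bit offset, reaching only about
// +/-2 GiB from the call site. The addresses here are made up for the example.
const NEAR_CALL_RANGE = 2n ** 31n;

function canUseNearCall(callSite: bigint, target: bigint): boolean {
  const displacement = target - callSite;
  return displacement >= -NEAR_CALL_RANGE && displacement < NEAR_CALL_RANGE;
}

// JIT-compiled JavaScript placed close to the builtins: a fast direct call works.
console.log(canUseNearCall(0x10000000n, 0x12000000n)); // true

// Builtins landing in a far-away, randomly chosen region: the direct call can't
// reach them, so the generated code falls back to a slower indirect call.
console.log(canUseNearCall(0x10000000n, 0x7fff00000000n)); // false
```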

In practice, the technology achieves the speedup by easing the work of the processor’s branch prediction mechanism, a chip component that guesses where a program will jump next so the processor can keep working instead of waiting. Calls to nearby code are easier for the chip to predict than calls to distant code, so Chrome reduces the risk of incorrect guesses that hold up processing and thereby improves performance.

Image: Google
