UPDATED 21:19 EDT / JANUARY 23 2019

Facebook open-sources LASER encoder tech for natural language processing

Facebook Inc. has made another key contribution in its quest to accelerate the transfer of natural language processing applications to more languages.

The social media giant today open-sourced a new PyTorch tool called LASER, which stands for Language-Agnostic Sentence Representations. With LASER, Facebook is trying to create a kind of mathematical representation that can encapsulate and understand all natural languages, no matter how unique they may be.

The open-sourcing of LASER follows the publication of a Facebook research paper in December. That paper, “Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond,” describes how Facebook engineers trained a single neural network model that can represent the structure of 93 languages written in 34 separate alphabets.

Facebook eventually built what it calls a “single representation”: a mathematical transformation of sentences into vectors that encapsulates the structural similarities across all 93 languages. That single representation was then used to train algorithms on tasks that involve matching sentences between pairs of languages the model had never seen paired before, Swedish to Swahili for example, a method known in the trade as “zero-shot” language learning.
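The matching step can be illustrated with a short sketch. The embed function below is a hypothetical stand-in for the LASER encoder and simply returns random vectors to keep the example runnable; the point is that once every language maps into the same vector space, finding an equivalent sentence in another language reduces to a nearest-neighbor search, even for a language pair the model never saw together during training.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def embed(sentences):
    # Hypothetical stand-in for the LASER encoder: the real tool would return
    # one fixed-length vector per sentence. Random vectors are used here only
    # to keep the sketch self-contained and runnable.
    return torch.randn(len(sentences), 1024)

# A Swedish query and a small pool of Swahili candidates (placeholder strings;
# any language pair works the same way because all languages share one space).
query = embed(["Hur mår du?"])
candidates = embed(["Habari yako?", "Ninaenda sokoni.", "Asante sana."])

# Cosine similarity in the shared space picks the closest candidate.
scores = F.cosine_similarity(query, candidates)
print(int(scores.argmax()), scores.tolist())
```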

Facebook researcher Holger Schwenk said the hope is that languages with “limited resources” will be able to benefit from the joint training of other, more popular languages so that natural language processing models can be built to understand them.

The code for LASER, available on GitHub, provides an “encoder-decoder” neural network built from long short-term memory, or LSTM, networks, a type of neural net widely used to process human speech and text.

The network is trained by translating sentences from any of the 93 source languages into English or Spanish. A sentence is fed into the encoder LSTM, which transforms the words into a vector of fixed length. A corresponding decoder LSTM then tries to produce a sentence in English or Spanish that matches the meaning of the original.
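As a rough illustration of that architecture, here is a minimal PyTorch sketch of a bidirectional LSTM encoder that pools its hidden states into one fixed-length sentence vector, plus a toy decoder that conditions on that vector to predict target-language tokens. The class names, layer sizes and max-pooling choice are illustrative assumptions, not Facebook’s actual implementation.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Minimal BiLSTM encoder that compresses a sentence into one fixed-length
    vector. Dimensions here are illustrative, not the ones Facebook used."""
    def __init__(self, vocab_size, emb_dim=320, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        states, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        # Max-pool over time so every sentence, whatever its length,
        # becomes a single fixed-length vector.
        return states.max(dim=1).values               # (batch, 2*hidden)

class Decoder(nn.Module):
    """Toy LSTM decoder that conditions on the sentence vector and predicts
    target-language (e.g. English or Spanish) tokens step by step."""
    def __init__(self, vocab_size, sent_dim=1024, emb_dim=320, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + sent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, sentence_vec, target_ids):
        # Append the sentence vector to every decoder input step.
        emb = self.embed(target_ids)                               # (batch, T, emb)
        ctx = sentence_vec.unsqueeze(1).expand(-1, emb.size(1), -1)
        states, _ = self.lstm(torch.cat([emb, ctx], dim=-1))
        return self.out(states)                                    # (batch, T, vocab)

encoder = SentenceEncoder(vocab_size=50000)
decoder = Decoder(vocab_size=32000)
src = torch.randint(0, 50000, (2, 12))   # two dummy source sentences
tgt = torch.randint(0, 32000, (2, 10))   # dummy English/Spanish targets
logits = decoder(encoder(src), tgt)
print(logits.shape)                       # torch.Size([2, 10, 32000])
```

Because the decoder only ever sees the pooled vector, the encoder is forced to pack everything needed for translation into that single representation, which is what makes the vector reusable across languages and downstream tasks.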

Facebook said that by training the encoder on numerous bilingual texts, such as the OpenSubtitles2018 collection of movie subtitles available in 57 languages, the system becomes more accurate at producing the single mathematical representation that allows it to translate sentences correctly.

Analyst Holger Mueller of Constellation Research Inc. said LASER looks to be a key contribution in the quest to better understand human speech in next-generation applications. However, he said it remains to be seen if early implementations of the tool can prove its validity.

“As a German native speaker who speaks a few other languages, abstracting sentence structure is no trivial task,” Mueller said.

Image: Facebook
