UPDATED 16:44 EST / OCTOBER 11 2021


Microsoft’s AI-powered Translator service now supports 100+ languages and dialects

Microsoft Corp. today added support for 12 languages and dialects to the Translator service in its Azure public cloud, which uses artificial intelligence to translate text automatically.

The languages and dialects that Microsoft has added are Bashkir, Dhivehi, Georgian, Kyrgyz, Macedonian, Mongolian (Cyrillic), Mongolian (Traditional), Tatar, Tibetan, Turkmen, Uyghur and Uzbek (Latin). The update represents a notable milestone for Translator: It brings the total number of languages and dialects supported by the service to more than 100. That’s up from about 80 at the start of the year.

Microsoft enables users to access Translator’s automatic translation features in several ways. The service is available to enterprises through an application programming interface in the company’s Azure public cloud. Consumers, meanwhile, can access Translator via the Bing search engine’s built-in translation tool or through the standalone translation apps that Microsoft offers for iOS and Android. An integration is available for the Office suite of productivity applications as well.
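For enterprises, a call to the Azure-hosted service goes through the Translator Text REST API. The sketch below shows how such a request could be assembled, assuming the public v3.0 interface; the subscription key, region, and sample text are placeholders, not values from the article.

```python
import json
import urllib.parse

# Public endpoint of the Azure Translator Text API (v3.0).
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(texts, to_langs, subscription_key, region):
    """Construct the URL, headers, and JSON body for a /translate call.

    subscription_key and region are placeholders for the caller's
    Azure credentials.
    """
    # api-version is required; "to" may repeat for multiple target languages.
    params = {"api-version": "3.0", "to": to_langs}
    url = ENDPOINT + "?" + urllib.parse.urlencode(params, doseq=True)
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    # The body is a JSON array of objects with a "Text" field.
    body = json.dumps([{"Text": t} for t in texts])
    return url, headers, body

# Example: translate one string into Uzbek and Georgian, two of the
# newly added languages.
url, headers, body = build_translate_request(
    ["Hello, world"], ["uz", "ka"], "YOUR_KEY", "YOUR_REGION"
)
```

The request itself would then be sent as an HTTP POST (for instance with `requests.post(url, headers=headers, data=body)`), returning the translations as JSON.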

The Azure version of Translator offers features not included in the consumer implementations. Companies can customize the AI models powering the translation service by adding support for industry-specific terms such as product names. Microsoft says one of the companies using Translator, Volkswagen AG, translates about 1 billion words every year across more than 60 languages. 

On the occasion of today’s update, Microsoft shared new details on how it’s working to enhance the neural networks powering Translator. The company’s researchers have developed a multilingual AI model dubbed Z-Code that allows neural networks optimized for different languages to learn from one another in order to improve their accuracy. The ability to reuse certain information reduces the amount of training data that Microsoft must assemble to develop new AI models.

The process of sharing knowledge between neural networks is known as transfer learning. Optimizing a neural network to perform two similar but separate tasks, such as determining the topic of books and the topic of scientific papers, normally requires supplying the neural network with two separate training datasets, one for each task. In theory, transfer learning could make it possible to accomplish two related tasks with just a single training dataset, which would significantly simplify AI development.
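The pattern can be illustrated with a toy sketch. Everything below is hypothetical, not Microsoft's Z-Code architecture: a shared "encoder" trained on one task is frozen and reused for a related task, so only a small task-specific "head" needs new training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this encoder matrix was already learned on task A
# (classifying book topics). It maps 50-dim inputs to 8-dim features.
encoder = rng.normal(size=(50, 8))

def encode(x):
    """Shared feature extractor, frozen after task-A training."""
    return np.tanh(x @ encoder)

# Task B (classifying paper topics) trains only a small head on the
# shared features -- here, a single least-squares fit on a handful of
# synthetic labeled examples.
x_b = rng.normal(size=(20, 50))              # 20 task-B examples
y_b = rng.integers(0, 2, size=20).astype(float)
features = encode(x_b)                       # reuse the encoder, don't relearn it
head, *_ = np.linalg.lstsq(features, y_b, rcond=None)

preds = encode(x_b) @ head                   # task-B predictions
```

The head has only 8 parameters versus the 400 in the encoder, which is the point of the technique: the expensive, data-hungry part of the model is learned once and shared.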

Researchers haven’t yet mapped out exactly how transfer learning works. Microsoft’s efforts in this area could help advance the field and, along the way, enable the company to more effectively compete with Google Translate. Like Microsoft, Google LLC offers a cloud-based version of its translation service for enterprises alongside the consumer editions. 

Microsoft is also developing new AI technology to enhance its other services, including Bing. The company earlier this year detailed its work on MEB, a neural network with 135 billion parameters that it had deployed in Bing to deliver better search results for users. 

Photo: Microsoft
