UPDATED 01:25 EST / SEPTEMBER 08 2017


Google demonstrates natural language processing and context improvements for Assistant at GDD Europe

Google Developer Days Europe took place in Krakow, Poland, this week. The event doesn’t attract much attention compared with Google Inc.’s much larger and more popular I/O conference, but the company did announce some improvements for Google Assistant and showcased some Google Lens features.

Google recently uploaded its keynote from day two of GDD, presented by Behshad Behzadi, a distinguished engineer and senior engineering director at Google whose team is focused on the development of Google Assistant. During the demonstrations, Behzadi showcased the natural language processing and context improvements for Assistant and Lens.

The company’s personal digital assistant has received updates since it was launched at the “Made by Google” event in October, but few details have emerged about Lens since it was announced at I/O in May. Although the Lens features take up only a small portion at the end of the keynote, they demonstrate some of the impressive capabilities Lens has to offer.

No details were given of when users will get access to the new Assistant improvements, but the release of some of the features may coincide with the launch of the Google Pixel 2, expected to be announced on Oct. 5.

Google Assistant

In addition to the updates detailed below, Google Assistant now answers questions faster, leverages Google Search more effectively when answering them and recognizes speech more accurately in noisy environments.

Improvements in natural language processing

Thanks to the merging of Google Search and machine learning, Google Assistant is starting to understand more complex natural language queries.

Based on an elaborate description of a Tom Cruise movie provided by Behzadi, Google Assistant was quickly able to provide the correct movie title and read out a description.  

Stored preferences expanded

Google Assistant’s existing stored preferences feature already lets you save your favorite sports team or set Home as a saved location.

Behzadi demonstrated how you can also store custom preferences related to weather, whereby he set a temperature preference for swimming in Lake Zurich. He then asked Google Assistant whether he would be able to swim this weekend and it responded with an answer of “No” due to the temperature being lower than 25 degrees celsius.

Better understanding of a question’s context

Google also showcased Google Assistant’s improvements in understanding the context of questions. When Behzadi asked to see pictures of Thomas with no prior context, Assistant displayed pictures of Thomas the Tank Engine. After he requested the team roster for FC Bayern Munich and asked again, Google Assistant instead pulled up pictures of the team’s Thomas Müller.

Behzadi also showed how Google Assistant can answer a series of follow-up questions about a subject, in this case the Empire State Building, after it is mentioned only once in the initial question.

Integrations with Translator and other apps

With the new “be my translator” mode, Google Assistant can translate natural language statements into another language and then say the translated statement out loud. In the demonstration, Behzadi told Google Assistant “be my Vietnamese translator” and then had various statements translated, a useful feature for travelers in foreign countries who may need to get directions, report lost luggage and the like.

An integration with Google Street View can “teleport” (the project’s internal codename at Google) a user to a specific location. You can ask Assistant to take you to a specific place, in this case the top of the Eiffel Tower, and Street View will open at that location.

You can watch the keynote’s live demonstrations of the improvements made to Google Assistant here:

Google Lens

At the end of the keynote, Google also demonstrated how Lens, integrated with Assistant, can provide contextual information based on an image and answer relevant questions.

Google Assistant, together with Lens, provided the calorie content of a photographed apple. Behzadi also demonstrated a currency conversion feature, which converted photographed Polish zlotys to Swiss francs when asked.

Image: Google
