UPDATED 12:13 EDT / MAY 18 2016

NEWS

At I/O, Google promises virtual assistant, home voice hub, chat and video apps

As much as Google Inc. keeps pushing into new and unexpected technology territory, it finds itself in an unusual position on several key tech battlegrounds as it opens its 10th annual I/O conference for software developers today: behind.

Google leads in some important foundational areas such as machine learning and retains a commanding lead in search. But it faces hard-charging competition in messaging, mobile advertising, virtual reality and home automation.

Starting at 10 a.m. Pacific today at the Shoreline Amphitheatre near the Googleplex in Mountain View, Chief Executive Officer Sundar Pichai and other executives are set to announce plans and products to fight back on all of those fronts and more, according to various reports and people close to the company. At the same time, Google can still surprise, and developers will be looking to see if it leapfrogs rivals such as Facebook Inc. and Amazon.com Inc. in some of those areas, such as messaging chat bots, voice-driven virtual assistants, VR headsets and 3-D positioning devices.

I’ll be liveblogging the planned 90-minute event, after which I’ll provide a wrap-up of the highlights. You can also watch the keynote here.

UPDATE post-keynote: Google managed to fulfill most, if not all, of the expectations ahead of the conference, announcing:

* Google assistant, a virtual assistant that will work both in apps and in a device called Google Home that will be available later this year.

* Android Instant Apps, which let developers break their apps into modules so you can use parts of an app in other apps, such as messaging, without having to install it.

* Allo, a new messaging app that will use Google assistant and automated replies to make communication faster.

* Duo, a new one-to-one video calling app. Allo and Duo will be available this summer.

* Daydream, a mobile VR platform. (No new VR headset yet, though it did show a reference design on the screen, if not in the flesh.)

* Firebase, which morphs from a mobile back-end as a service into a full mobile development platform.

* Android Wear 2.0, offering among other things a way for an Android watch to play music and perform other actions without an accompanying phone.

However, there’s a caveat for many of these products and services: They’re not available yet. The phrase “later this year” came up a lot at the keynote. If Google wants to keep up with, let alone surpass, its rivals, it will need to make sure it ships what it promises.

9:30 a.m.: Thousands of developers (7,000, actually) have now filed into the amphitheatre and are trying to talk above the deafening music.

10 a.m.: Musicians of some sort are playing strings up in towers, creating music and patterns of lines on the stage screens. Must be about to start…. Now a guy’s playing electronica onstage.

10:04: Sundar Pichai takes the stage. He says more than 3 billion people are on mobile devices worldwide.


Google CEO Sundar Pichai (Photo: Robert Hof)


Pichai talks about all the technologies and services Google is providing, from computer vision for Google Photos to speech recognition to machine learning, for services such as Google Translate. “We are at a seminal moment and believe we are poised to take a big leap forward,” he says.

And with that, he announces “Google assistant” (with a lower-case “a”). “We think of this as building Google for each individual user.” We’ve started getting truly conversational searches because of advances in natural language understanding, he says. “Our ability to do conversational understanding is far ahead of what others can do.”

He says Google is just getting started on this. One example is a common situation: On a Friday night, he wants to take his family to the movies. You should be able to ask Google what’s playing tonight. Going a step further, it could suggest three to 11 movies you might like. He could say he wants to bring the kids this time and get family-friendly suggestions. It might suggest Jungle Book, he’d say sure, and the tickets would get bought.

This is just one version of the conversation, he notes. I could have asked, Is Jungle Book any good? Maybe Google could give me reviews and a trailer. Every single conversation and context is different… for billions of users. “We think of the assistant as an ambient experience that extends across devices and places.”

Now he turns to the home. We’ve already made devices for the home and we’ve thought about how to bring Google assistant to the home, he says. We’re looking at launching something later this year.

Mario Queiroz, a Google vice president of product management, comes on to provide an early preview of that. Google wants to make it useful and enjoyable. It can manage music throughout the house, manage everyday tasks. Anyone will be able to have a conversation with it. It can do a dialogue no matter where you are in the room.

We’re putting a lot of craftsmanship into the design, he says. No buttons. Here’s what it will do: Play music anywhere in the home since it will work with other speakers. You can say “play in the living room.” You can control video content too.

And it will become more and more of a control center for home tasks, Queiroz says. You’ll be able to control home lighting, call an Uber, etc. (In other words, we do everything Amazon’s Echo does and more!)

What really distinguishes it, though, is that it has search built in. You could ask, “What was the U.S.’s population when NASA was formed?” It can help you retrieve your travel itinerary, he says. “As Google keeps getting better, so will Google Home.” A video shows a wide range of other activities it’s intended to handle, such as notifying you of flight changes and allowing you to change a dinner reservation time, not to mention waking up your kids with lights and music.

Google Home (Photo: Robert Hof)


Pichai returns with a new take on Google Photos, handing it over to Erik Kay, who announces two new communications apps. (Google doesn’t have enough already? Apparently not.)

The first is called Allo. It makes conversations more productive by bringing information from the Google assistant into your chats. One feature is “whisper shout,” which lets you express how you feel by making chat replies bigger or smaller.

Google has also built Smart Replies into chats, the feature that suggests automated replies in email. Allo learns over time and suggests responses unique to you. “You can say the things you want without having to type a single word,” he says. It also offers Smart Replies when people send photos to you, because Google can understand images and even their context.

The Google assistant also helps Allo do other things, like suggesting a restaurant when a friend sends a photo of clam linguine, and also make a reservation through OpenTable. (We can do Facebook M even better!) You can also play games such as Emoji Movies inside Allo. The idea is that developers will come up with a lot more possibilities.

Google has also created an incognito mode for Allo chats. You can do private notifications and decide how long chats will be saved. There’s also end-to-end encryption. Allo will be the first home for the Google assistant.

Now he turns to video calling. He introduces Duo, a simple one-to-one video calling app for everyone. It’s the video companion to Allo. The “Knock Knock” feature shows who’s calling, and even why, with a live video preview. (We’re better than Apple FaceTime, too!)

Duo was built by the team that created WebRTC, the open-source technology that powers much of online video communication. The other thing that sets Duo apart, according to Google, is reliability: the app monitors video quality and degrades gracefully when bandwidth drops.

Both Allo and Duo will be available on Android and iOS devices this summer.

Dave Burke, vice president of engineering for Android, comes on to talk about … Android. He says it’s the most popular mobile software by number of devices, working on smartphones, tablets, watches and cars.

Android N is the newest version, announced in March with an early developer preview. Google is asking for suggestions on what N stands for. (Previously the names had all been confections.) Just don’t name it Namey McNameface, Burke pleads.

Graphics and runtime are the two main improvements. Vulkan, the new graphics API, allows for ever more realistic animations. The Android Runtime now compiles much faster, which means faster app installation, smaller app code size, and reduced battery consumption. He’s going into more arcane details on security, automatic updates and the like that I’m not going to do justice to.

Productivity improvement is a big focus for N as well, such as using multiple apps at the same time. The system can automatically close apps you haven’t used for a while. There are also split-screen and picture-in-picture modes, the latter designed for Android TV, not surprisingly.

Another new area for Android N: notifications. There’s a new direct-reply feature. You don’t need to launch an app to send a response to a notification. You can also block or limit notifications on each app.

And then there’s emoji. Android N supports the latest Unicode versions for more humanlike emojis.

Android N emojis (Photo: Robert Hof)


Clay Bavor, who leads the virtual reality team at Google, takes the stage to talk about VR support in Android N, and what kinds of problems need to be solved for VR to work on mobile.

Daydream is the new platform for high-quality mobile VR. It will be available in the fall. Three parts to it: smartphones, headset and controllers, and apps themselves.

Smartphones will need certain specs to use Daydream. There’s a VR mode for Android N that will reduce latency. Samsung, LG, HTC and most of the phone makers will have phones that can use it this fall.

Headsets: Google has created a reference design, shared with several companies that will have the first versions this fall.

Controllers: Very small, with just a couple of buttons. Inside, there’s an orientation sensor.

Apps: Apps in the Play Store will be enabled for VR. Also Google Play Movies, Street View, Photos and YouTube.

Now we’re on to Android Wear, the watch and other wearables platform. The new 2.0 version includes some pretty fancy watch faces that look analog but also have health info, notifications from calls or messages and other features. You can also use your finger to write replies.

On the fitness front, there’s automatic activity recognition, and ways to enjoy music more easily even if you don’t have your phone with you. Strava and other fitness apps will track time and distance automatically when you start running. These features will be available to users in the fall, but developers can get a preview version now.

Google’s Android developer group provides some details on Android Studio and other tools that essentially make it easier to create faster, better-looking apps.

Google also announces an update to its Firebase mobile back-end as a service that turns it into a broad mobile development platform, including a new analytics service. You can read a detailed Firebase story here. It works across Android and iOS and it’s free.

Ellie Powers from the Android team offers a sneak peek at a new product that will roll out over the next year: a way to bring users into apps more quickly. You’ll be able to run apps with one tap, without an installation. It’s called Android Instant Apps.

So if you don’t have the BuzzFeed video app on your phone and a friend sends you a link to something in the app, you can still see it. The app is split into modules, and Google Play downloads only the modules needed for that particular part of the app. Developer access will roll out gradually, and users will see it later this year.

Google's Android Instant Apps demo at I/O (Photo: Robert Hof)


Back to Pichai, who talks about opening up access to machine learning technology through its open-source project TensorFlow and new parser (called Parsey McParseface). Pichai says machine learning will be one of the big differentiators for the Google Cloud Platform.

He also shows a new “TPU” chip module that fits into a disk-drive bay in computers. Essentially an add-on accelerator for running TensorFlow machine learning workloads, it’s much more efficient at certain machine learning tasks than CPUs or GPUs (graphics processing units, the current favorite of AI researchers).

And that’s a wrap.

There were at least two no-shows: the rumored VR headset (though Google announced Android VR support and showed a vague reference hardware design) and a new Nexus tablet. Also absent from the keynote was the Tango 3-D positioning phone, probably because Lenovo is set to unveil it soon.

