At I/O, Google debuts new Android O, AI chip, new Assistant and Photos features, VR headset
Updated with news and analysis:
When Google Inc. held its last I/O developer confab a year ago, it looked like the tech giant had fallen behind rivals such as Amazon.com Inc., Facebook Inc. and even Microsoft Corp. on many fronts, from intelligent assistants and virtual reality to messaging and even mobile advertising.
Not anymore. A flurry of announcements at I/O 2016, such as a mobile VR platform, two new messaging apps and the Google Assistant digital helper, plus a huge hardware event in October that produced Google Home, the Daydream VR device and two new high-end Pixel smartphones, vaulted Google back into the game. Meanwhile, it has managed to keep its machine learning and artificial intelligence technologies more than competitive. And while its cloud computing platform still lags behind Amazon Web Services and Microsoft Azure, it's gradually gaining new and larger customers as it adds services.
Back in fighting form, Google didn't need to dazzle at this year's I/O conference so much as show how it's continuing to run with most of those initiatives, which it did. Still, it had some surprises, including a coming virtual reality headset that doesn't need a smartphone and a new chip for training machine learning models, as well as a raft of new features for Photos, Assistant and Home. Not least, the new Android O mobile operating software is now available in a full beta test and will be ready sometime this summer.
Below are running highlights of the announcements as they came fast and furious.
Google Chief Executive Sundar Pichai takes the stage to show off Google’s customarily awesome user numbers. Android, he says, has now crossed more than 2 billion active devices.
“Computing is evolving again,” he says, and AI and machine learning are sweeping across everything from search and Maps to video calling. Smart Reply automated email responses, powered by machine learning, are rolling out to all Gmail users this week.
Humans are interacting with computing in more natural ways, he says. Word error rate on speech recognition has gone from 8.5 percent last July to 4.9 percent today.
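For context on that speech-recognition figure: word error rate is conventionally computed as the word-level edit distance between the recognized transcript and a reference transcript, divided by the number of reference words. A minimal sketch of that calculation (the function name and examples are mine, not Google's):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words gives a WER of 1/6.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

By that yardstick, going from 8.5 percent to 4.9 percent means Google's recognizer now botches roughly one word in twenty instead of nearly one in twelve.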
“We are clearly at an inflection point for vision,” he says, so today the company is announcing Google Lens, a way to take a photo of something like a flower and have Google tell you the flower's name. You can also point your phone at, say, the barcode on a router and get the setup instructions for it.
He’s not done with AI yet. Pichai says there’s a new version of Google’s Tensor Processing Units, the custom chips it makes for machine learning. Last year, when they were introduced in Google’s own data centers, they handled only “inference,” providing the real-time analysis for deep learning. Now a second-generation TPU, rated at 180 teraflops, can also handle the much more computationally intensive job of training the algorithmic models for deep learning. As of today, the chips are available on Google’s cloud computing service.
Like the first TPU last year, this one is another shot at the graphics processing units made mainly by Nvidia Corp., which introduced its own new machine learning GPUs last week. Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, said the new chip is a step forward, but the TPUs are likely to be useful for relatively limited types of workloads, unlike GPUs, which can handle a wide variety of tasks.
“Google’s use of TPUs for training is probably fine for a few workloads for the here and now, but given the rapid change in machine learning frameworks, sophistication, and depth, I believe Google is still doing much of their machine learning production and research training on GPUs,” Moorhead said. “It’s unclear to me what Google is actually getting out of their TPU journey beyond a positioning point of why their AI is better because they offer a TPU.”
Another new service: AutoML, in which neural networks train other neural networks, with the goal of making machine learning easier for non-experts to use. Google is already nearing the state of the art in image recognition with the technique, he says.
Health care is one focus of Google’s AI efforts. Work on biopsies for tumors is improving quickly, he says.
And of course AI is being applied to search, Google Assistant being a prime example. Scott Huffman, a vice president of engineering at Google, says the company is starting to get “really good” at being conversational. “We’re really starting to crack the hard computer science of conversationality.” And Assistant can distinguish between voices.
As of today, he says, Assistant can understand typed queries for when you’re too shy to talk to it, such as in a public place. Google Lens can be used with Assistant as well, for instance to analyze the marquee of a theater that has a band name listed, and provide info on the show and even buy tickets for it.
And as expected, Assistant is now available on the iPhone. Hello, Siri. Other devices soon will offer it too through a software development kit.
More news: French, German, Japanese and a few other languages are coming on Assistant this summer, with Spanish and others by the end of the year.
Valerie Nygaard, senior product manager for search and Assistant, comes on to talk about how developers can leverage Assistant. Actions on Google, which allows developers to engage with customers in various ways via bots, will support transactions as of today.
Now it’s on to Google Home, with Rishi Chandra, vice president of product management for the device. For one, it’s expanding to more countries, such as the U.K. He also announces four new features for Google Home:
* Proactive assistance: It will tell you you need to leave in 15 minutes to get to an appointment.
* Hands-free calling: You can call anywhere in the U.S. and Canada for free by saying, “Hey Google, call mom.” He actually calls his mom, who wonders why he didn’t call three days ago for Mother’s Day. It’s personalized by voice, and it’s rolling out in the U.S. in the next few months. Hello, Amazon Echo, which just announced something similar.
* Your favorite entertainment: Spotify will add its free music service to Google Home, for one. Home also will add support for SoundCloud and Deezer, plus Bluetooth support so you can play audio from other devices.
* Visual responses: If you want to view directions, you can say to Home, “OK Google, let’s go,” and a map will appear on your phone. Chromecast also will be updated to show visual responses on your phone, such as your calendar. And you can ask it to play a show on your TV.
Yet another new feature: Photo Books, as expected. A book can be created in minutes from the phone, Google says. You can search for photos of your wife and kids and select them to print to a book: $9.99 for a softcover, $19.99 for a hardcover. Google will choose, say, 40 of the best. Soon, machine learning will be applied to suggest custom books, such as from a trip.
Google Lens can also be used after you’ve taken a photo, so it’s getting added to Google Photos as well as Assistant. You can identify, say, buildings or paintings in your photos. Or you can tap a phone number in a photo to call the business.
Time for YouTube now, with Susan Wojcicki, the unit’s chief executive, and product managers to talk about new features. Sarah Ali, head of product for YouTube Living Room, introduces 360-degree video in the living room, not just on the phone or in a VR headset. YouTube also is introducing 360 live in the living room.
Barbara Macdonald comes on to talk about Super Chat, introduced in January, which lets YouTube creators interact with people commenting on their live streams. Now viewers can pledge money to make something happen, such as throwing water balloons at the hosts. At least I think that was the point, but Macdonald’s over-the-top enthusiasm was profoundly distracting.
Finally, the festivities move on to Android. Dave Burke, engineering vice president for Android, says Android O is “very much a work in progress,” but it will be available this summer.
A couple of new things in Android O: “fluid experiences” and “vitals.” On the former, there’s picture-in-picture, which lets you keep watching Netflix in a small window, for instance, without leaving the app you’re in. Another new feature: notification dots. You can long-press an app icon that has a dot on it and immediately see the notification. There’s also autofill, which has now been extended to apps, so you can log into a particular app much faster. One more thing: smart text selection, to make copy-and-paste easier. You can double-tap on entities such as names and the whole phrase will be selected. If it’s an address, you can get to a map right away.
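The smart text selection trick, conceptually, is that a tap anywhere inside a recognized entity snaps the selection out to the entity's full span. Android does this with an on-device machine learning model; a toy version of the same behavior using regex patterns as a stand-in recognizer (patterns and names are illustrative, not Google's):

```python
import re

# Toy entity recognizers standing in for Android's on-device models.
ENTITY_PATTERNS = [
    re.compile(r"\(\d{3}\) \d{3}-\d{4}"),            # U.S. phone number
    re.compile(r"\d+ [A-Z][a-z]+ (St|Ave|Blvd)\b"),  # simple street address
]

def smart_select(text: str, tap_index: int) -> str:
    """Expand a tap at tap_index to the whole entity containing it."""
    for pattern in ENTITY_PATTERNS:
        for match in pattern.finditer(text):
            if match.start() <= tap_index < match.end():
                return match.group()  # snap to the full entity span
    # No entity under the tap: fall back to the single word.
    for match in re.finditer(r"\S+", text):
        if match.start() <= tap_index < match.end():
            return match.group()
    return ""

# Tapping on "Main" selects the whole address, not just the word.
print(smart_select("Meet me at 123 Main St tomorrow", 16))  # 123 Main St
```

The "get to a map right away" part then follows naturally: once the selection is known to be an address, the system can route it straight to Maps.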
Burke also announces TensorFlow Lite, a version of Google’s deep learning framework that can run right on the phone in real time.
Next up: “vitals,” to keep your phone secure and healthy. First, security enhancements: Google Play Protect will scan all apps to find problems. Second, OS optimizations: both boot time and apps are now up to twice as fast. Apps that run in the background are also a problem for battery life, so there will be smart automatic ways to reduce those drains. Afraid I missed the third one.
One more thing: Android is adding a new programming language, Kotlin, which gets big cheers from the crowd.
Burying the lede, Google also says the first beta release of Android O is available today.
A new experience for entry-level Android devices called Android Go is intended to be more appealing to the “next billion users,” that is, those in countries where most people use entry-level devices with less memory and have limited data bandwidth. The idea is that Android Go makes apps work better on devices with as little as 1 gigabyte of memory.
Google’s own apps also will limit data use as much as they can. YouTube Go is one example of a less data-intensive app; it shows how much data a particular video will use.
The first devices with these capabilities will ship next year.
Perhaps saving the best for last, it’s time to hear about what’s coming in virtual and augmented reality, with VR chief Clay Bavor. The Samsung Galaxy S8 will add Daydream VR support this summer, for one. “About time,” analyst Moorhead mutters next to me.
An entirely new kind of VR device is coming to Daydream too: a standalone VR headset with no cables and no phone required. Google is working with manufacturers on it, Bavor says. Among the improvements in Daydream is WorldSense, which tracks where you are in space much more precisely.
Google is working with Qualcomm on a reference design for this headset. HTC Vive and Lenovo are working with Google on the devices, which will arrive later this year.
Now on to AR: A new Google Tango AR phone from Lenovo is coming out this year too. Another new feature: Visual Positioning Service, sort of an indoor GPS, makes it easier to find where to go in a store, for example. Precise location will be critical for camera-based interfaces such as Google Lens, he says.
For the wrap-up, it’s back to Pichai, who announces a new initiative: Google for Jobs, which will use machine learning to better match people with job openings. One element is a cloud jobs API, or application programming interface. It’s like other search challenges Google has faced in the past, Pichai said.
Despite the frequent talk, not entirely unjustified, about how Google is making the world better, many of these services will end up being in service to Google’s main business. “All of this new information you’re giving these systems are going to be used to build profiles for advertising,” Moorhead noted.
Photos: Robert Hof