Google has just released a new, standalone voice search app for iOS devices to take on Apple’s flagship Siri personal assistant.
It’s not quite the same as Siri – at the moment, all it can do is perform Google searches and a few other odds and ends, unlike the system-wide functionality of Apple’s software. However, it does offer a couple of very big benefits: compared to Siri, Google’s voice search app is far more accurate, and it’s lightning fast.
According to a review by Apple Insider, Google’s voice search leaves Siri for dust as far as speed and efficiency are concerned, and a little investigating uncovers exactly why that is.
Big Data = Fewer Mistakes
One reason Google’s voice recognition software is so much more accurate comes down to big data. Just this week, Google released a research paper detailing some of the behind-the-scenes work its data scientists have been doing to improve the technology. The paper goes into a lot of detail, but Google’s Ciprian Chelba provides a short summary on the Google blog, which helps to underline why everyone is so excited about big data these days.
The premise of Google’s big data-derived voice recognition research is that “more is better”, and so they’ve been busy creating enormous language models that help them to predict the next word that a speaker will say, based on the preceding words they’ve just spoken.
Chelba gives an example in his blog post: Google’s language model assigns a much greater probability to the word “pizza” following the words “New York” than to, say, “granola”. The idea is that the better the software can predict the likelihood of the next word, the more efficient and accurate it will be.
According to Chelba, the size of the language model is all-important – their research shows that doubling its size cut the number of errors by 10%.
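The idea behind this kind of language model can be sketched with a toy bigram counter. This is a hypothetical illustration, not Google’s actual system – the corpus, function names, and probabilities here are invented for the example, and the real models are vastly larger:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for Google's enormous training data
# (the real models are trained on billions of word sequences).
corpus = [
    "new york pizza is great",
    "new york pizza tastes amazing",
    "new york granola is rare",
]

# Count bigrams: how often each word follows a given previous word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word_probability(prev, candidate):
    """Estimate P(candidate | prev) from the bigram counts."""
    total = sum(follows[prev].values())
    if total == 0:
        return 0.0
    return follows[prev][candidate] / total

# In this corpus, "pizza" follows "york" twice and "granola" once,
# so "pizza" gets the higher probability.
print(next_word_probability("york", "pizza"))    # 2/3
print(next_word_probability("york", "granola"))  # 1/3
```

A bigger corpus sharpens these estimates, which is the intuition behind Chelba’s “more is better” finding: doubling the model’s size gives the predictor more evidence about which words actually follow which.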
Client-Side Architecture = Slick Response
Big data helps to cut down on errors, but it doesn’t really explain why Google’s voice search is so much faster than Siri’s.
That’s because Google cut down on its response time by using a totally different architecture from the one Apple uses. Siri is essentially a server-side app, which means that requests are not processed by software on the phone, but instead sent off to a server in the cloud that does the processing for it. Google’s voice search, on the other hand, is a client-side app, which means the device itself does the processing.
Apple adopted the server-side method because it allows for what’s called server-side learning, enabling the entire system to improve over time. But it also means every request has to make a round trip to the cloud and back before the user sees anything, and that disadvantage becomes obvious when it comes to handling longer queries.
But with Google Voice Search using big data to handle the accuracy side of things anyway, it looks like Siri could well find itself usurped from its position as king of the personal assistant apps. Unlike Siri, where a longer query almost always results in an interminable delay, Google’s voice app is able to begin processing even the most complex questions from the moment you begin speaking, delivering the answers you need in an instant.
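The latency difference between the two approaches can be illustrated with a simplified sketch. This is not how either app is actually implemented – the chunk sizes, delays, and function names below are all invented for the illustration – but it shows why overlapping processing with speech beats buffering the whole utterance plus a network round trip:

```python
import time

def speech_chunks(words, word_gap=0.02):
    """Simulate words arriving one at a time as the user speaks."""
    for word in words:
        time.sleep(word_gap)
        yield word

def streaming_recognizer(chunks):
    """Client-side style: work on each word as it arrives, so
    processing overlaps with the user's speaking time."""
    results = []
    for word in chunks:
        results.append(word.upper())  # stand-in for recognition work
    return " ".join(results)

def batch_recognizer(chunks):
    """Server-side style: buffer the whole utterance, pay a network
    round trip, then process everything at once."""
    buffered = list(chunks)  # nothing happens until speech ends
    time.sleep(0.1)          # hypothetical round-trip latency
    return " ".join(w.upper() for w in buffered)

query = "what is the best pizza in new york".split()

start = time.time()
streaming_result = streaming_recognizer(speech_chunks(query))
streaming_time = time.time() - start

start = time.time()
batch_result = batch_recognizer(speech_chunks(query))
batch_time = time.time() - start

# Both produce the same transcript, but the batch version pays the
# round-trip cost on top of the full utterance.
print(batch_time > streaming_time)
```

The transcripts come out identical either way; the streaming version simply finishes sooner because, by the time the last word arrives, most of the work is already done.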
Faster, slicker, and more accurate – it looks like Google’s got this one sewn up.
Before joining SiliconANGLE, Mike was an editor at Argophilia Travel News and an occasional contributor to The Epoch Times, and has also dabbled in SEO and social media marketing. He usually bases himself in Bangkok, Thailand, though he can often be found roaming through the jungles or chilling on a beach.
Got a news story or tip? Email Mike@SiliconANGLE.com.