UPDATED 00:06 EDT / FEBRUARY 19 2016

NEWS

Google’s Cloud Vision takes image recognition to the next level

Google has added another AI tool to its developer toolbox in the form of its Cloud Vision API. The beta release of Cloud Vision, which had been available in limited preview since last December, is the latest in a flurry of AI-related announcements from Silicon Valley giants, as Google goes head to head with companies like Microsoft and IBM in a race to dominate this emerging niche.

Google’s Cloud Vision API puts the company’s image-recognition machine learning directly into developers’ hands. Using the tool, developers can build applications and robots that are capable of recognizing the content of an image. For example, show it a picture of a banana and the bot will call it what it is. Alternatively, you could tell your robot to single out the smiling faces from those that are frowning, and it’ll give you the answer faster than you can snap your fingers.
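As a rough sketch of what that looks like in practice, the snippet below asks the Cloud Vision REST endpoint to label a local photo. It assumes the v1 JSON API and the third-party Python "requests" library; the API key and file name are placeholders, not values from Google's announcement.

    import base64
    import requests  # third-party HTTP library

    API_KEY = "YOUR_API_KEY"  # placeholder: a Google Cloud API key with the Vision API enabled
    ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

    # The JSON API expects the image bytes base64-encoded inside the request body.
    with open("banana.jpg", "rb") as f:  # placeholder file name
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "requests": [{
            "image": {"content": image_b64},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
        }]
    }

    response = requests.post(ENDPOINT, json=payload).json()

    # Print the labels the service thinks describe the image, e.g. "banana", "fruit".
    for label in response["responses"][0].get("labelAnnotations", []):
        print(label["description"], label["score"])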

The software, which also powers Google Photos, can detect or identify hundreds of different objects, colors and facial expressions in a given image, for example flowers, food, animals, notable landmarks and so on. Other potential uses include detecting inappropriate content such as pornography in crowdsourced images (as Google’s SafeSearch does), analyzing people’s emotions, detecting logos, reading text and more.
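To give a sense of how those detection types combine, this second sketch requests several annotations for one image in a single call. It again assumes the v1 endpoint and the "requests" library, and the file name and API key are placeholders; the feature names follow the public API reference rather than anything stated in the article.

    import base64
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder
    ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

    with open("crowd_photo.jpg", "rb") as f:  # placeholder file name
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "requests": [{
            "image": {"content": image_b64},
            "features": [
                {"type": "SAFE_SEARCH_DETECTION"},            # flag adult or violent content
                {"type": "FACE_DETECTION", "maxResults": 10},  # faces plus joy/sorrow/anger likelihoods
                {"type": "LOGO_DETECTION"},                    # brand logos
                {"type": "TEXT_DETECTION"},                    # OCR on any text in the image
            ],
        }]
    }

    result = requests.post(ENDPOINT, json=payload).json()["responses"][0]

    print(result.get("safeSearchAnnotation"))  # e.g. adult/violence likelihood ratings
    for face in result.get("faceAnnotations", []):
        print(face["joyLikelihood"], face["sorrowLikelihood"])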

In a blog post, Ram Ramanathan, product manager for Google Cloud Platform, said the Cloud Vision API is available for anyone to submit and analyze images during its beta period, the length of which has not been specified. Users can submit up to 20 million images per month, with pricing dependent on image volume and the content detection features required.

Google claims that “thousands of companies” have already used the API since it came out in preview last December, generating millions of requests for image annotations. One of its biggest users is the social photo editing app Photofy, which relies on Cloud Vision to moderate over 150,000 photos a day, weeding out those that contain inappropriate content like pornography and violence.

The release of Cloud Vision into beta follows a promise from Google CEO Sundar Pichai last October that the company would prioritize its machine learning efforts this year. Google has already released its TensorFlow machine learning technology, which powers Google Search, to the open-source community.

One reason why Google is prioritizing its machine learning efforts is that the AI sector is red hot at the moment, and just about every major tech company is trying to carve out a niche for itself. Google faces stiff competition from the likes of IBM and Microsoft in the race to create the best machine learning tools. For example, IBM open-sourced a rival to Google’s TensorFlow software called SystemML last November, while Microsoft recently released a number of its own machine learning tools on GitHub, including software capable of recognizing people’s emotions based on their facial expressions in images.

The scramble to dominate AI is an interesting race in itself, and all the more so for developers and robotics creators, who can make use of a whole host of tools that didn’t exist just a few months ago.

