Google has added another AI tool to its developer toolbox in the form of its Cloud Vision API. The beta release of Cloud Vision, which had been available in limited preview since last December, is the latest in a flurry of AI-related announcements from Silicon Valley giants, as Google goes head to head with companies like Microsoft and IBM in a race to dominate this emerging niche.
Google’s Cloud Vision API applies machine learning to image recognition. Using the tool, developers can build applications and robots that are capable of recognizing the content of images. Show it a picture of a banana, for example, and the bot will call it what it is. Alternatively, you could tell your robot to single out the smiling faces from the frowning ones, and it’ll give you the answer faster than you can snap your fingers.
The software, which Google also uses to power Google Photos, can identify hundreds of different objects, colors and facial expressions in a given image, such as flowers, food, animals and notable landmarks. There are other potential uses too: detecting inappropriate content such as pornography in crowdsourced images (as Google’s SafeSearch does), analyzing people’s emotions, detecting logos, reading text, and more.
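To give a flavor of how developers tap these capabilities, the sketch below builds the kind of JSON request body the Cloud Vision API accepts at its `images:annotate` endpoint, asking for label, face and SafeSearch annotations in one call. The endpoint URL and feature-type names come from Google’s public API documentation; the helper function name and placeholder image bytes are our own illustration, and the actual network call is left commented out.

```python
import base64
import json

VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"


def build_annotate_request(image_bytes, feature_types, max_results=10):
    """Build the JSON body for a batch image-annotation request.

    Images are sent base64-encoded; each requested feature (e.g.
    LABEL_DETECTION, FACE_DETECTION, SAFE_SEARCH_DETECTION) is listed
    under "features".
    """
    return {
        "requests": [
            {
                "image": {
                    "content": base64.b64encode(image_bytes).decode("ascii")
                },
                "features": [
                    {"type": ftype, "maxResults": max_results}
                    for ftype in feature_types
                ],
            }
        ]
    }


# Placeholder bytes stand in for a real JPEG; in practice you'd read a file.
body = build_annotate_request(
    b"<raw image bytes>",
    ["LABEL_DETECTION", "FACE_DETECTION", "SAFE_SEARCH_DETECTION"],
)
print(json.dumps(body, indent=2))

# A real request would POST this body with an API key, e.g.:
# requests.post(f"{VISION_ENDPOINT}?key=YOUR_API_KEY", json=body)
```

The response comes back as one annotation object per request, mirroring the order of the submitted images.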
In a blog post, Ram Ramanathan, Product Manager of Google Cloud Platform, said the Cloud Vision API is available for anyone to submit and analyze their images during its unspecified beta timeframe. Users can submit up to 20 million images per month, with pricing dependent on image volume and content detection requirements.
Google claims that “thousands of companies” have already used the API since it came out in preview last December, generating millions of requests for image annotations. One of its biggest users is the social photo editing app Photofy, which relies on Cloud Vision to moderate more than 150,000 photos a day, weeding out those that contain inappropriate content like pornography and violence.
The release of Cloud Vision into beta follows a promise from Google CEO Sundar Pichai last October that the company would prioritize its machine learning efforts this year. Google has already released its TensorFlow machine learning technology, which powers Google Search, to the open-source community.
One reason why Google is prioritizing its machine learning efforts is that the AI sector seems to be red hot at the moment, and just about every major tech company is trying to carve out a niche for itself. Google faces stiff competition from the likes of IBM and Microsoft in the race to create the best machine learning tools. For example, IBM open-sourced a rival to Google’s TensorFlow software called SystemML last November, while Microsoft recently released a bunch of its own machine learning tools onto GitHub, including software capable of recognizing people’s emotions based on their facial expressions in images.
The scramble to dominate AI is an interesting race in itself, and all the more so for developers and robotics creators, who can make use of a whole host of tools that didn’t exist just a few months ago.
Before joining SiliconANGLE, Mike was an editor at Argophilia Travel News, an occasional contributor to The Epoch Times, and has also dabbled in SEO and social media marketing. He usually bases himself in Bangkok, Thailand, though he can often be found roaming through the jungles or chilling on a beach.
Got a news story or tip? Email Mike@SiliconANGLE.com.