UPDATED 21:27 EDT / DECEMBER 19 2017

EMERGING TECH

Google has built a machine learning model that can decide if images are pleasing to the human eye

Researchers at Google LLC are taking image recognition to the next level with an experimental machine learning model that can rank photos according to their aesthetic appeal.

The new model, called Neural Image Assessment, is designed to assess which images humans are most likely to find aesthetically pleasing. This is quite different from existing models, which can only categorize images according to their technical quality. With NIMA, Google’s researchers say, it’s now possible to rate images with a much higher degree of correlation to human perception.

The announcement was made in a Dec. 18 blog post co-authored by Google researchers Hossein Talebi and Peyman Milanfar, who say that their new algorithm can closely match the average scores given to images by more than 200 human raters.

NIMA could be applied to a variety of labor-intensive tasks that call for more subjective judgment, the researchers say. Possible applications include intelligent image editing and software that optimizes visual quality while minimizing any perceived errors.

NIMA was built using a convolutional neural network, a type of machine learning model commonly used for image recognition and classification. However, unlike other models, which assess image quality according to factors such as blurring, compression and pixel-level imperfections, NIMA focuses on the characteristics humans associate with beauty and emotions, the researchers said.

CNNs are trained on data that has been labeled and rated by humans, which teaches the models to identify the characteristics people are likely to find aesthetically pleasing. That training data typically includes reference images tied to specific quality metrics; when no reference image is available, statistical models are used to predict quality instead.
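The general recipe the researchers describe, a CNN image classifier repurposed to predict how humans would rate a photo, can be sketched in a few lines of PyTorch. The example below is a rough illustration rather than Google's code: the MobileNetV2 backbone, the dropout rate and the class names are assumptions chosen for brevity.

```python
# A minimal sketch, not Google's implementation: reuse a pretrained CNN
# classifier as a feature extractor and replace its final layer with a
# small head over ten score buckets.
import torch.nn as nn
from torchvision import models

NUM_BUCKETS = 10  # ratings on a scale of 1 to 10, as described in the article


class AestheticScorer(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.mobilenet_v2(
            weights=models.MobileNet_V2_Weights.IMAGENET1K_V1
        )
        self.features = backbone.features       # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)      # collapse spatial dimensions
        self.head = nn.Sequential(               # new head trained on human ratings
            nn.Dropout(p=0.25),
            nn.Linear(backbone.last_channel, NUM_BUCKETS),
            nn.Softmax(dim=1),                   # probability for each score 1..10
        )

    def forward(self, x):
        feats = self.pool(self.features(x)).flatten(1)
        return self.head(feats)
```

In this setup the backbone supplies generic visual features, and only the final head has to learn what human raters consider aesthetically pleasing.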


Example of images ranked by NIMA for their aesthetic value. (Source: Google)

So, rather than classifying images as either low or high quality, the “NIMA model produces a distribution of ratings for any given image — on a scale of 1 to 10, NIMA assigns likelihoods to each of the possible scores,” Talebi and Milanfar said.

“This is more directly in line with how training data is typically captured, and it turns out to be a better predictor of human preferences when measured against other approaches,” said the researchers, who offer a more in-depth discussion of their work in a technical paper.
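That per-score distribution can be collapsed into a single number for ranking photos by taking its expected value. The snippet below is a hypothetical illustration with invented distributions, not actual NIMA output.

```python
# Hypothetical illustration: converting a NIMA-style distribution over
# scores 1..10 into a mean score that can be used to rank photos.
# The two distributions below are made up for the example.

def mean_score(distribution):
    """Expected score for a probability distribution over ratings 1..10."""
    return sum(score * p for score, p in enumerate(distribution, start=1))

photo_a = [0.01, 0.02, 0.03, 0.05, 0.10, 0.15, 0.20, 0.22, 0.14, 0.08]
photo_b = [0.10, 0.15, 0.20, 0.18, 0.12, 0.10, 0.07, 0.04, 0.03, 0.01]

print(round(mean_score(photo_a), 2))  # 6.96 -> ranked as more pleasing
print(round(mean_score(photo_b), 2))  # 4.1
```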

Image: Suju/Pixabay
