Google and MIT develop AI that improves pictures before you even take them
Most camera apps touch up photos after you take them, but Google Inc. and the Massachusetts Institute of Technology have developed a new machine learning algorithm that makes your pictures better before you even take them.
This week at Siggraph, an annual computer graphics conference, researchers from MIT and Google demonstrated a new camera system that uses artificial intelligence to display retouched images in real time. This allows photographers to see what the algorithm will do with their picture before they even take the shot. Even more impressive, the system is so efficient that it can run on a standard smartphone.
According to MIT News, the new AI builds on an earlier project by the same MIT researchers, which sent low-resolution versions of images to a server that then returned instructions telling the phone how to retouch them. The idea behind that process was to offload the heavy computing work to the cloud without consuming large amounts of data transferring full-size images back and forth. The new system created by MIT and Google cuts out the middleman and runs the process entirely on a user’s phone.
“The idea was to do everything we were doing before but, instead of having to process everything on the cloud, to learn it. And the first goal of learning it was to speed it up,” explained Michaël Gharbi, an MIT graduate student and one of the researchers who worked on both the original project and the new system.
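To make the efficiency argument concrete, here is a rough, hypothetical sketch of that general idea in PyTorch: an inexpensive network looks only at a heavily downsampled copy of the frame and predicts a coarse adjustment that is then upsampled and applied to the full-resolution image. The layer sizes, the 256-pixel proxy resolution and the per-pixel gain representation are illustrative assumptions, not details drawn from the researchers’ paper.

```python
# Hypothetical sketch of on-device enhancement: the expensive network never
# sees full-resolution pixels, which is what keeps the cost phone-friendly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        # A few strided convolutions over a small proxy image (sizes are illustrative).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Predict a coarse grid of per-pixel RGB gains -- a stand-in for whatever
        # adjustment representation the real system learns.
        self.to_gains = nn.Conv2d(64, 3, 1)

    def forward(self, full_res):
        # Work on a heavily downsampled proxy of the frame.
        proxy = F.interpolate(full_res, size=(256, 256),
                              mode='bilinear', align_corners=False)
        coarse = torch.sigmoid(self.to_gains(self.features(proxy))) * 2.0
        # Upsample the cheap-to-compute adjustment and apply it at full resolution.
        gains = F.interpolate(coarse, size=full_res.shape[-2:],
                              mode='bilinear', align_corners=False)
        return (full_res * gains).clamp(0.0, 1.0)
```

The point of the sketch is only that the heavy computation stays at low resolution; the actual system described at SIGGRAPH is efficient enough to run live in a phone’s viewfinder.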
To train the AI, the researchers used more than 5,000 images that had been retouched by five professional photographers. They also fed the AI thousands of pairs of images showing the before and after of various automated image-processing operations, such as those used to create high-dynamic-range images.
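The article doesn’t go into how the training itself works beyond the use of before-and-after pairs, but a minimal, hypothetical training loop over such pairs might look like the following, assuming a simple pixel-wise loss against the retouched target. The random tensors stand in for real image pairs, and the model is the sketch network from above.

```python
# Hypothetical training loop over (original, retouched) image pairs.
import torch
import torch.nn.functional as F

model = TinyEnhancer()                      # the sketch network from above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for real (original, professionally retouched) pairs; in practice
# these would be loaded from a training set like the one the article describes.
raw = torch.rand(4, 3, 512, 512)
retouched = torch.rand(4, 3, 512, 512)

for step in range(100):
    pred = model(raw)                       # network's guess at the retouched image
    loss = F.mse_loss(pred, retouched)      # pixel-wise difference from the target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```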
Jon Barron, a senior research scientist at Google, told MIT News that the new machine learning-powered system offers some exciting possibilities for mobile photography.
“This technology has the potential to be very useful for real-time image enhancement on mobile platforms,” said Barron. “Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”
Retouching cellphone images may seem like a somewhat frivolous use of AI, but the researchers’ project demonstrates just how much machine learning can accomplish on even the weakest computing systems.