I was really intrigued by the Pelican Imaging array camera solution being pitched for smartphones and other compact photography or video applications. Since Pelican isn’t doing much talking right now, I did some thinking over the weekend trying to wrap my head around the concept and why it works. The problem that needs to be solved is that smartphone cameras and small point-and-shoot cameras have small image sensors with correspondingly small lenses, which means less light-gathering capability. The less light captured, the grainier and noisier the resulting picture.
The knee-jerk reaction by some pundits is that all we need to do is put larger image sensors in these smartphones and cameras, but that isn’t a solution because larger image sensors require larger lenses. Using small lenses on large sensors is generally a waste of money since the smaller lens can’t provide enough light for the sensor to operate in higher quality modes with reasonably low ISO numbers.
Not only do the lenses have to be wider and taller, they also need to be proportionally deeper, which exceeds the allowable thickness of a compact camera or smartphone. Most people have seen those big white lenses used by sports and newspaper photographers that are nearly the size of a telescope. It’s obvious that a lens of that size can’t be used on a compact camera, much less a mobile phone.
Array cameras have been researched by institutions like Stanford University and are being commercialized by companies like Pelican Imaging. The basic idea is that an array of smaller sensors and smaller lenses still requires more total lens surface area, but it eliminates the need for thicker lenses. That has significant ramifications for design thickness and weight. I created the following conceptual mock-up illustration using the Canon T2i (original image of the camera from Canon), which is the camera I own. Actual array cameras dispense with the camera body and use fixed (non-zoom) lenses, but the mock-up below gives a good idea of how the concept works.
Here we have one hypothetical giant camera that uses a lens with 4 times the linear dimensions to satisfy an image sensor that is 4 times wider and taller. Because a lens increases in depth in addition to width and height, its mass and volume grow with the cube of the scale factor, which means it is 4³ = 64 times bigger. By using 16 separate normal-size cameras in an array, we only increase mass and volume 16 times, and the depth of the lenses doesn’t increase 4-fold. If we had a 10×10 array of cameras, a single equivalent lens would need to be 1,000 times larger, but 100 normal-size cameras in an array are only 100 times larger, making the array 10 times smaller than a single camera with comparable sensor area. With a 20×20 array, the combined lenses are 20 times smaller and 20 times thinner than a single giant lens.
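For readers who want to check the arithmetic, here is a short sketch of the cube-law argument above (my own illustration, not anything published by Pelican Imaging). With a linear scale factor n, a single lens that is n times wider and taller must also be roughly n times deeper, so its volume grows as n³, while an n×n array of normal-size cameras grows only as n²:

```python
def single_lens_volume(n):
    """Volume of one big lens scaled n times in every dimension
    (units: one normal-size lens = 1)."""
    return n ** 3

def array_volume(n):
    """Total lens volume of an n x n array of normal-size cameras."""
    return n ** 2

# The ratio n**3 / n**2 = n is the size (and thickness) advantage
# of the array over a single giant lens of equivalent sensor area.
for n in (4, 10, 20):
    big = single_lens_volume(n)
    arr = array_volume(n)
    print(f"{n}x{n} array: single lens {big}x, array {arr}x, "
          f"array is {big // arr}x smaller")
```

Running it reproduces the numbers in the paragraph above: a 4× scale-up gives a 64× lens versus a 16× array, a 10×10 array is 10 times smaller than the equivalent single lens, and a 20×20 array is 20 times smaller.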
The end result is a substantially smaller and thinner super high resolution camera, but one that is likely fixed zoom because the geometry of the array is impractical to change. Pelican Imaging didn’t disclose more than the following illustration highlighting their product’s advantage by using a 5×5 array of tiny cameras packaged into a much thinner and lighter smartphone.
The concept could be scaled to the point where the image quality of a smartphone is good enough to threaten even higher-end compact cameras. Smartphones are already a large source of images posted on photo sharing sites like Flickr because of their ubiquitous mobile Internet connectivity; higher quality photographs on smartphones could severely impact the camera industry.
In theory, higher-end compact cameras could rival or even exceed DSLR optical quality, but it would likely be impractical to implement variable zoom. Higher-end DSLRs will likely continue to thrive because they’re more flexible for serious photographers and cinematographers (and because they produce that nostalgic “film look”), but the higher volume compact camera market could join portable gaming consoles on the endangered species list.
[Cross-posted at Digital Society]