Camera technology in smartphones has progressed by leaps and bounds in recent years. This is a blessing for photography enthusiasts and professionals alike. The camera in your pocket helps you capture the moment even when you don’t have your photography gear with you. Modern smartphone cameras have heaps of features built into them. But have you ever wondered how that tiny camera module can do so much? Photographer Ted Forbes explains how smartphone cameras work:
The lens is one of the basic elements of smartphone photography. Smartphones have small lenses with fixed focal lengths.
Unlike your DSLR or mirrorless camera system, you can’t change the lenses on your smartphone. To address this issue, manufacturers have started including multiple camera modules in their smartphones that usually cover the standard, wide, and telephoto focal lengths.
Also, because the lens is so small, relatively little light can pass through it, which makes low-light performance poor compared to a proper camera system.
Every lens in a smartphone camera is coupled with a sensor. The sensor records the light coming in from the lens in digital format, just like any other digital camera. Like the lenses, the sensors in a smartphone camera are tiny. This is one reason why smartphone cameras can't produce as shallow a depth of field as dedicated cameras. The small sensor size is another factor that hinders the low-light performance of smartphone cameras.
“Computational imaging is basically a fancy term for being able to utilize the computer part of your phone with a series of algorithms that will hopefully overcome some of the shortcomings that you have with having a very small lens and a very small sensor.”
Think of computational imaging as the camera software in your smartphone that processes the image captured by the camera module to deliver the best possible result. For instance, in low-light scenarios, the smartphone camera switches over to "night mode." This mode uses a slower shutter speed or a higher ISO and then removes the resulting digital noise in software. This is a basic example of computational photography, but cameras today are far more advanced than that.
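To get a feel for why capturing and combining multiple frames helps in low light, here is a minimal toy sketch (not any phone's actual pipeline): averaging N noisy exposures of the same scene cuts random sensor noise by roughly the square root of N. The "scene" and noise level are made-up values purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" scene: a dim, flat gray patch (0-255 scale).
scene = np.full((64, 64), 40.0)

def noisy_exposure(scene, noise_sigma=10.0):
    """Simulate one short, high-ISO exposure: the scene plus random sensor noise."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# A single frame is noisy...
single = noisy_exposure(scene)

# ...but averaging 16 frames cuts the noise by roughly sqrt(16) = 4x.
frames = [noisy_exposure(scene) for _ in range(16)]
stacked = np.mean(frames, axis=0)

print(f"single-frame noise:   {np.std(single - scene):.2f}")
print(f"16-frame stack noise: {np.std(stacked - scene):.2f}")
```

Real night modes also have to align the frames (hands move between shots) and handle moving subjects, but this averaging effect is the core reason stacking works.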
You’ll notice an advanced application of computational imaging in Google’s Pixel smartphone cameras. For instance, the “Night Sight” feature in Pixel’s camera app helps you take brighter, shake-free photos even when you handhold the phone during a long exposure. It does this by first measuring how shakily the phone is being held, or whether it’s on a tripod, which helps it decide how many exposures it needs to take. It also uses optical image stabilization (OIS) and then finally creates a composite of the multiple exposures. The final image is well-exposed, has vibrant colors, and is sharp.
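The "measure shake, then decide how many exposures" step can be sketched as a simple heuristic. This is a hypothetical illustration, not Google's actual algorithm; the thresholds and frame counts are invented for the example. The idea is just that a steady phone can afford fewer, longer frames, while a shaky one needs many short ones:

```python
def plan_exposures(shake_level, total_time=4.0):
    """Split a long capture into several shorter frames.

    shake_level: estimated motion from the gyroscope, 0.0 (on a tripod)
    to 1.0 (very shaky hands). Thresholds here are purely illustrative.
    Returns (frame_count, seconds_per_frame).
    """
    if shake_level < 0.1:      # effectively on a tripod
        frames = 4             # few, long exposures
    elif shake_level < 0.5:    # steady handheld shooting
        frames = 8
    else:                      # shaky hands: many short frames
        frames = 16
    return frames, total_time / frames

print(plan_exposures(0.05))  # tripod: long individual frames
print(plan_exposures(0.8))   # handheld: short individual frames
```

Shorter per-frame times keep each individual frame sharp; the compositing step then recovers the brightness a single long exposure would have given.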
Most mobile shooters are happy with what they get straight out of their phone cameras. However, if you like, you can shoot in raw as well. Many high-end devices support raw photography, which gives you control over the final look of the image. Support varies by device: some phones offer raw capture natively in their camera app, while others, such as the iPhone, can shoot in raw only through third-party apps.
If you shoot raw, you can use applications like Snapseed or Lightroom mobile to process the images on your smartphone.
Will Smartphones Replace Cameras?
While smartphone cameras produce excellent results, you can rest assured that smartphones won’t be totally replacing cameras for professional work. Imagine, as a photographer, showing up at a wedding with just your smartphone. You might be able to get some good images, but they can’t beat the ones taken with a proper camera.
The question now is, when will computational photography make its way into cameras? If algorithms can make up for the shortcomings of a small smartphone camera, imagine the possibilities of using them in a full-fledged camera!