What on earth is computational photography? The simple answer is that it is being developed to ensure that smartphone cameras can achieve what their big brothers can, and perhaps even more.
As has often been mentioned, the major restriction for smartphone cameras is the lack of space. If they are really to be ‘the camera in your pocket’ they need to be pocket-sized. The addition of extra cameras (zoom, wide angle, portrait, etc.) has been an absolute boon for smartphone manufacturers, as they have improved the versatility and performance of the cameras beyond expectations.
One camera, two cameras, three cameras, four cameras, five. Sounds like a version of the old one potato, two potato game. But this is no game; the cameras provide the hardware and computational photography provides the software to make it all work and take great images.
Because of the space limitations inside the camera stack, there is no room for a bigger lens that could collect more light. The challenge, then, is how to do more with the light that is collected.
One challenge shared with standard cameras is how to increase the ISO the sensor can handle without a corresponding increase in digital noise.
All digital cameras already have to use a form of computational photography simply to convert the digital signal from the sensor into a usable image.
Computational photography is also used for key actions such as auto-focus and object tracking. But these uses are actually fairly basic when one considers the real potential. The key to the development of true smartphones was the increased power of modern small processors.
This really opened the way for mobile phone manufacturers to start harnessing the power of computer coding to do amazing things with the light that was being collected by the cameras, both single and now multiple. The challenge was how to compete with standard cameras in image quality with tiny sensors that can be up to 10X smaller.
The answer is computational.
The key fact is that the data coming from a digital sensor is not a snapshot in the traditional camera sense; it is a stream of data that flows for as long as the sensor is exposed to light. To capture an image the smartphone just selects an arbitrary chunk of the data stream, and this equates to the shutter speed.
With modern chips, the smartphone can capture parts of the stream in addition to the actual image and this gives the opportunity to add context. Context can contribute all manner of things, some of which are only in the early stages of development. It can be photographic elements like the lighting and distance to the subject. But it can also be used to link multiple cameras to produce special effects. We all know about image blending of differently exposed images to make use of HDR techniques. With context, the images in the stream can be manipulated and blended intelligently to produce instant HDR.
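The exposure-blending idea behind HDR can be sketched in a few lines. This is a deliberate simplification, not any manufacturer's actual pipeline: assume three frames from the stream were captured at different exposures, and weight each pixel by how well exposed it is, so that clipped shadows and blown highlights are discounted.

```python
import numpy as np

def blend_hdr(frames):
    """Blend differently exposed frames into one image.

    Each pixel is weighted by how close it is to mid-gray (0.5),
    so well-exposed pixels dominate and clipped ones contribute
    little. Frames are float arrays normalized to [0, 1].
    """
    frames = np.stack(frames)                   # shape: (n, H, W)
    weights = 1.0 - np.abs(frames - 0.5) * 2.0  # 1 at mid-gray, 0 at clip
    weights += 1e-6                             # avoid divide-by-zero
    return (frames * weights).sum(axis=0) / weights.sum(axis=0)

# Toy example: dark, normal, and bright "frames" of a 2x2 scene.
dark   = np.array([[0.05, 0.10], [0.02, 0.40]])
normal = np.array([[0.30, 0.50], [0.10, 0.95]])
bright = np.array([[0.60, 0.90], [0.45, 1.00]])
result = blend_hdr([dark, normal, bright])
```

Real pipelines also align the frames to cancel hand shake and apply tone mapping afterwards; the weighting step above is only the core of the blend.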
With some cameras, this is being done continuously to allow for HDR on demand, for example.
Having multiple cameras together with context has made an array of new features possible: zoom, better HDR, portrait modes, 3D, and low-light photography. It has also presented new challenges for phone makers, but the result has been a much better photo experience for phone owners.
We need to bear in mind that it is still early days in the development of multiple-camera smartphones, so we can expect to see rapid progress in the technology and the features it enables over the coming years.
Telephoto and zoom are becoming a fascinating blend of physical development (multiple cameras, etc.) and computational photography.
Until recently optical zoom was fairly rare and the majority of smartphones made use of limited-quality digital zoom. This is now changing very fast. The physics of lens design made it very complex to fit a zoom lens into the thin body of a high-end smartphone. So over the last two years, almost all flagship phones have moved to a dual-camera-and-sensor design for their rear-facing camera instead of trying to add an optical zoom.
Most make use of a traditional camera module paired with a 2x telephoto module, although 3x and 5x models are beginning to appear.
Images from the two camera modules can be blended to create improved results, but this presents some unique computational challenges. For example, the preview image displayed to the user is taken from one camera module or the other but needs to switch smoothly between the two modules as the user zooms in or out.
For that to happen, the images from the two modules need similar exposure, focus distance, and white balance.
Because the two modules are slightly offset from each other, the preview needs some realignment to minimize the shift in the image when the phone switches from one camera module to the other. Using a dedicated zoom lens on a standalone camera is obviously best, but the results coming from such a small smartphone can be amazing.
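The module-switching logic described above can be sketched as follows. Everything specific here is invented for illustration: the 2x switch-over point and the 12-pixel parallax baseline are example values, not any phone's real calibration.

```python
def pick_module(zoom, switch_at=2.0):
    """Decide which camera module should drive the preview.

    Below the telephoto's native magnification, the wide module's
    image is digitally cropped; at or above it, the telephoto takes
    over. 'switch_at' is the telephoto's optical factor (2x here).
    Returns the module name and the residual digital zoom it applies.
    """
    if zoom < switch_at:
        return "wide", zoom           # digital crop on the wide module
    return "tele", zoom / switch_at   # residual digital zoom on the tele

def align_offset(point, baseline_px=(12, 0)):
    """Shift a preview coordinate to compensate for the physical
    offset (parallax) between the two modules, so the image does not
    appear to jump at the switch-over. The baseline is a made-up
    constant; real phones calibrate it per device and per distance."""
    return (point[0] - baseline_px[0], point[1] - baseline_px[1])
```

Matching exposure, focus, and white balance between the modules, as the article notes, is a separate problem that has to be solved before this hand-off looks seamless.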
It is the advances made in computational photography that have contributed to the growing success of multiple cameras in smartphones.
Something more complex is, of course, portrait mode, with its artificial background blur that is becoming more and more common. The software analyzing the data stream has to work out which parts of the image belong to a particular physical object and the exact contours of that object. This can be derived from motion in the stream, from stereo separation between multiple cameras, and from a great deal of built-in experimentation.
These techniques are only possible because the needed imagery has been captured from the stream and because phone manufacturers have spent time developing the fast algorithms needed to perform these calculations. This is not a simple job, as these algorithms need vast amounts of computational time to develop and refine.
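Once the software has estimated a depth value for each pixel, the blur itself is conceptually simple. The sketch below is a toy version with invented numbers: it keeps pixels near the subject's depth sharp and replaces everything else with a crude blur, whereas real portrait modes blend gradually and refine the subject's edges.

```python
import numpy as np

def portrait_blur(image, depth, subject_depth=1.0, tolerance=0.5):
    """Apply artificial background blur using a per-pixel depth map.

    Pixels whose depth is close to the subject stay sharp; the rest
    are replaced with a blurred version of the image. The hard mask
    and the crude global-mean "blur" are deliberate simplifications.
    """
    blurred = np.full_like(image, image.mean())        # stand-in blur
    subject_mask = np.abs(depth - subject_depth) < tolerance
    return np.where(subject_mask, image, blurred)

# Toy 2x2 scene: left column is the subject at ~1m, right is far away.
image = np.array([[0.9, 0.2], [0.8, 0.1]])
depth = np.array([[1.0, 5.0], [1.1, 6.0]])
out = portrait_blur(image, depth)
```

The hard part in practice is not this final step but producing an accurate depth map in the first place, which is exactly where the stereo separation and motion cues described above come in.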
This is where competition is really heating up between the manufacturers, both phone and component. And the major beneficiaries of all of this are you and me, as we find that our smartphone cameras just get better and better.
When you next pick up your smartphone to capture an image, remember that this amazing camera in your pocket is the end result of some incredible technology and hard work!
About the Author:
Roger Lee is a Johannesburg based photographic trainer and cruise ship speaker on photography. He runs a successful one day “Enjoy Your Camera” course, and his popular ebooks for people who don’t want to drown in detail are at www.camerabasics.net. His new smartphone photography ebook is at www.smartphone.org.za.