When you take a picture on your smartphone, the camera captures the scene, and then software processes that raw capture into the finished photo. Algorithms adjust, combine, and correct the data coming off the sensor long before you ever see the result. This is computational photography, and it's the biggest reason smartphone photos look as good as they do despite coming from such tiny cameras. There are lots of different techniques involved, and those are what we'll be looking at in this article. But first, let's take a closer look at what computational photography actually is, and why it helps improve smartphone photos.
Using Software to Improve Your Smartphone Camera
Computational photography is a broad term for loads of different techniques that use software to enhance or extend the capabilities of a digital camera. Crucially, computational photography starts with a photo and ends with something that still looks like a photo (even if it could never be taken with a regular camera).
How Traditional Photography Works
Before going any deeper, let’s quickly go over what happens when you take a photo with an old film camera. Something like the SLR you (or your parents) used back in the ’80s.
When you click the shutter-release button, the shutter opens for a fraction of a second and lets light hit the film. All the light is focused by a physical lens that determines how everything in the photo will look. To zoom in on faraway birds, you use a telephoto lens with a long focal length, while for wide-angle shots of a whole landscape, you go with something with a much shorter focal length. Similarly, the aperture of the lens controls the depth of field, or how much of the image is in focus. As the light hits the film, it exposes the photosensitive compounds, changing their chemical composition. The image is basically etched onto the film stock.
What all that means is, the physical properties of the equipment you’re using control everything about the image you take. Once made, an image can’t be updated or changed.
Computational photography adds some extra steps to the process, and as such, only works with digital cameras. As well as capturing the optically determined scene, digital sensors can record additional data, like the color and intensity of the light hitting the sensor. Multiple photos can be captured in rapid succession with different exposure levels to pull more information out of the scene. Additional sensors can record how far away the subject and the background are. And a computer can then use all of that extra information to do something to the image.
While some DSLRs and mirrorless cameras have basic computational photography features built in, the real stars of the show are smartphones. Google and Apple, in particular, have been using software to extend the capabilities of the small, physically constrained cameras in their devices. For example, take a look at the iPhone's Deep Fusion feature.
What Kind of Things Can Computational Photography Do?
So far, we’ve been talking about capabilities and generalities. Now, though, let’s look at some concrete examples of the kind of things computational photography enables.
Portrait Mode
Portrait mode is one of the big successes of computational photography. The small lenses in smartphone cameras are physically unable to take classic portraits with a blurry background. However, by using a depth sensor (or machine-learning algorithms), they can identify the subject and the background of your image and selectively blur the background, giving you something that looks a lot like a classic portrait.
It's a perfect example of how computational photography starts with a photo and ends with something that looks like a photo, but uses software in between to create something the physical camera couldn't capture on its own.
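If you're curious what that looks like under the hood, here's a minimal sketch of the idea in Python with OpenCV. The file names and the depth threshold are made up for illustration, and real portrait modes use far more sophisticated segmentation and a graduated, lens-like blur, but the basic move is the same: blend a sharp image with a blurred copy of itself, guided by depth.

```python
import cv2
import numpy as np

# A minimal sketch of the portrait-mode idea: blend a sharp photo with
# a blurred copy of itself, using a depth map to decide which pixels
# belong to the background. "photo.jpg" and "depth.png" are hypothetical
# inputs; real phones get depth from a sensor or a segmentation model.
photo = cv2.imread("photo.jpg").astype(np.float32)
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Assume larger depth values mean farther away; ramp smoothly from
# subject (weight 0) to background (weight 1) around a chosen threshold.
background = np.clip((depth - 0.4) / 0.2, 0.0, 1.0)[..., np.newaxis]

# Keep subject pixels sharp, swap background pixels for blurred ones.
blurred = cv2.GaussianBlur(photo, (51, 51), 0)
portrait = photo * (1.0 - background) + blurred * background

cv2.imwrite("portrait.jpg", portrait.astype(np.uint8))
```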
Take Better Photos in the Dark
Taking photos in the dark is difficult with a traditional digital camera; there’s just not a lot of light to work with, so you have to make compromises. Smartphones, however, can do better with computational photography.
By taking multiple photos with different exposure levels and blending them together, smartphones are able to pull more details out of the shadows and get a better final result than any single image would give—especially with the tiny sensors in smartphones.
This technique, called Night Sight by Google and Night Mode by Apple (other manufacturers have similar names for it), isn't without tradeoffs. It can take a few seconds to capture the multiple exposures, and for the best results, you have to hold your smartphone steady the whole time. But it does make it possible to take photos in the dark.
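The core of the trick is surprisingly simple. Here's a rough sketch of the frame-stacking idea in Python with OpenCV; the file names are hypothetical, and real night modes also align the frames and vary the exposures rather than just averaging a burst.

```python
import cv2
import numpy as np

# A rough sketch of the multi-frame idea behind night modes: average a
# burst of aligned exposures so random sensor noise cancels out while
# the real scene reinforces itself. The file names are hypothetical, and
# real implementations align, weight, and merge frames far more
# carefully than a plain average.
frames = [cv2.imread(f"frame{i}.jpg").astype(np.float32) for i in range(5)]

# Averaging N frames reduces random noise by roughly a factor of sqrt(N).
stacked = np.mean(frames, axis=0)

cv2.imwrite("night_shot.jpg", np.clip(stacked, 0, 255).astype(np.uint8))
```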
Better Expose Photos in Tricky Lighting Situations
Blending multiple images doesn't just make for better photos when it's dark out; it can help in a lot of other challenging situations as well. HDR, or high dynamic range, photography has been around for a while and can be done manually with DSLR images, but it's now automatic and on by default in the latest iPhone and Google Pixel phones. (Apple calls it Smart HDR, while Google calls it HDR+.)
Whatever it's called, HDR works by combining photos that prioritize the highlights with photos that prioritize the shadows, and then evening out any discrepancies. HDR images used to be over-saturated and almost cartoonish, but the processing has gotten a lot better. The results can still look slightly off, but for the most part, smartphones do a great job of using HDR to overcome their digital sensors' limited dynamic range.
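If you want to experiment with the idea yourself, OpenCV ships with an exposure-fusion implementation (the Mertens algorithm) that does the same basic job. This sketch isn't Apple's or Google's pipeline, and the bracketed file names are made up, but it shows the blend in action.

```python
import cv2
import numpy as np

# Exposure fusion with OpenCV's built-in Mertens merger: each bracketed
# shot contributes most where its pixels are well exposed, so highlights
# come from the darker frame and shadows from the brighter one. The file
# names are hypothetical stand-ins for a real exposure bracket.
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

merger = cv2.createMergeMertens()
fused = merger.process(exposures)  # float32 output, roughly in [0, 1]

cv2.imwrite("hdr.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

Exposure fusion like this blends the bracketed shots directly instead of building a true HDR image and tone-mapping it back down, which tends to produce more natural-looking results.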
And a Whole Lot More
Those are just a few of the more computationally demanding features built into modern smartphones. There are loads more, like inserting augmented reality elements into your compositions, automatically editing photos for you, taking long-exposure images, and combining multiple frames to improve the depth of field of the final photo. Even the humble panorama mode relies on some software assists to work.
Computational Photography: You Can’t Avoid It
Normally, with an article like this, we’d end things by suggesting ways that you could take computational photographs, or by recommending that you play around with the ideas yourself. However, as should be pretty clear from the examples above, if you own a smartphone, you can’t avoid computational photography. Every single photo that you take with a modern smartphone undergoes some kind of computational process automatically.
Over the next few years, smartphone cameras are going to continue to become more capable as machine-learning algorithms get better and ideas move from research labs to consumer tech.