Computational photography is a term that comes up more and more often in photography conversations. If you don’t know what it means, it generally refers to mobile device photography, smartphones in particular, where app developers use software to create effects like shallow depth of field (bokeh) that would otherwise require knowledge and specialized equipment to achieve with a camera.
In other words, you select the effect or mode, then shoot and save the photo, instead of choosing a lens, calculating exposure, taking the picture, and refining it in post production.
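The idea behind synthetic bokeh can be sketched in a few lines: the phone estimates a depth map, keeps in-focus pixels sharp, and blurs the rest. The function names and the crude box blur below are purely illustrative, a minimal sketch rather than any vendor's actual pipeline, which relies on far more sophisticated depth estimation and lens-blur models.

```python
def box_blur(img, radius=1):
    """Crude box blur over a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the pixel with its neighbors inside the radius.
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def fake_bokeh(img, depth, focus_max):
    """Toy computational bokeh (illustrative only): pixels whose depth
    is within focus_max stay sharp; everything farther away is blurred."""
    blurred = box_blur(img)
    return [[img[y][x] if depth[y][x] <= focus_max else blurred[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

A real portrait mode does this per color channel with a depth map inferred from dual pixels or machine learning, but the principle is the same: the "shallow depth of field" is computed, not optical.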
This seems to have a lot of people upset, and they’re talking about the death of photography yet again. Some of the online photography gurus seem to think that their digital camera is somehow substantially different from, and far superior to, a smartphone camera. Yes, the phone has a much smaller sensor, less exposure control (as a rule), and no interchangeable lenses, but that’s about it.
Here’s the truth: most photography is computational, and it doesn’t matter in the least.
I’ll let that sink in for a minute.
You see, if you shoot film, there are some variables, but essentially you are exposing a chemically coated medium to light which, after treatment with other chemicals, holds a latent negative image that can then be printed onto light-sensitive paper as a positive image. No computers required. In film’s purest form there is nothing digital or computational about the process, but very few people print traditionally in a darkroom anymore. That is the only non-computational form of photography there is, or ever was.
So if you shoot film and send it to a lab for processing, the lab will likely send you scans of your shots on a disc or thumb drive. They might even upload your images as digital scans to a cloud-based service and give you a password or PIN to access them. They may or may not send you your negatives. More and more often, they won’t.
Even film has been pushed (pun intended) into the digital era of computational photography.
This is one of the ironies of shooting film in the digital era: you start with film and ultimately end up with a digital product that you then process on a computer before sending it to a printer (if you actually print photos instead of just posting them online).
If you shoot digital images instead of film, you are exposing a digital sensor to light. The camera’s internal software and processor then create an image. If you want to print it, you either use a home printer connected to your computer, or you send it to an outside printer (with better equipment) who will create a print from your digital file. Of course you can just view the image on the camera’s screen and call it a day. The process, however, relies on digital, computational technology from front to back.
What about camera RAW files? Well, sorry, it’s still computational. A RAW file isn’t really an image. It is essentially the unprocessed data read straight off the sensor, plus metadata that tells software how to build an image from it. It just leaves out the corrections (and compression) that the camera would bake into a JPEG, and every camera records and renders that information differently. This is why we get the various arguments about Canon’s color science vs Nikon’s vs Sony’s, ad infinitum. RAW is not a photo until software interprets it and displays it on a screen.
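To make that concrete: a typical raw file records one filtered brightness value per pixel (a Bayer mosaic, often in an RGGB layout), and software has to "demosaic" it into full RGB. The toy converter below, with illustrative names and a deliberately crude method, simply collapses each RGGB quad into a single RGB pixel. Real converters interpolate the missing colors instead, and the choice of interpolation and color handling is one reason different raw processors render the same file differently.

```python
def demosaic_2x2(raw):
    """Toy demosaic (illustrative only): treat each 2x2 RGGB quad as
    one RGB pixel, averaging the two green sites. Halves resolution;
    real converters interpolate to keep every pixel."""
    out = []
    for y in range(0, len(raw), 2):
        row = []
        for x in range(0, len(raw[0]), 2):
            r = raw[y][x]                              # red site
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2    # two green sites
            b = raw[y + 1][x + 1]                      # blue site
            row.append((r, g, b))
        out.append(row)
    return out
```

Until something like this runs (plus white balance, a tone curve, and compression), the raw data isn't viewable at all, which is exactly why RAW is computational by definition.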
Most people then use Lightroom or another Adobe product to post-process and edit their RAW images, so you don’t really even get to see the raw sensor data itself. You get Adobe’s interpretation of that data. That’s precisely what Adobe Camera Raw is for, and Adobe RGB is Adobe’s own color space.
There’s nothing wrong with Lightroom or Photoshop. I use both for virtually all of my digital post processing, and I think they are well suited to the task, but I understand what they are and how best to use them for my own purposes. (By the way, the same principle applies to any other brand of post-processing or image-editing software; it’s computational by its very nature.)
The good news is, it doesn’t matter. Really.
Photography has changed dramatically over the last hundred or so years. We have evolved from the Daguerreotype to the JPEG, and from the large format view camera to the smartphone. At each interval, the photographic community has condemned the changes as the “death of photography” and criticized them as “not really photography,” yet, in retrospect, it didn’t matter, and the technological progress actually advanced the art and science of photography. Photography has continued, advanced, changed, and evolved, but it hasn’t died because of any new technology.
I don’t expect it to die any time soon either, especially just because the process is becoming even easier.
Thanks for reading.