Taylor Davidson · Software is Eating the Camera
Originally posted on Medium.
I grew up around photography, rolling black-and-white film into cartridges, developing film and exposing prints in the darkroom, foul chemicals and rough timing and dim lights and prints hanging to dry. It’s a nostalgic experience that’s been replaced by Photoshop, the laptop, and the mobile phone, the sensations and uncertainties of yesterday replaced with the immediacy of today. Where we once exposed, printed, framed, and hung, we now capture, filter, and share.
From silver-plated sheets of copper to the Brownie to 35 mm film to digital processing and memory cards, the substance and shape of images have changed over the years, yet the form of the camera remained relatively unchanged until the rise of the smartphone and the smartphone camera. The smartphone turned a “camera” from a piece of hardware into a software application. The “camera” of today is one software application of many running on a device’s operating system, leveraging a couple of the many sensors embedded in the device.
There’s a long runway ahead of us in improving the quality of the images we take, with continued innovations in software and hardware coming down the research and commercialization pipeline. Continued innovation in lenses, image sensors, processors, and computing hardware will take the camera into new forms, shapes, and places (clip a camera on your clothing, wear it around your neck, put it on your bookshelf, etc.). And if any little box can become a camera, what will we do with them all?
But today, the most innovative work in “cameras” is happening in software. Computational photography is creating massive advancements in our ability to visually capture and represent our world. In its most obvious forms, it’s making our images better: crisper, clearer, less blurry and grainy, with sharper depths of field, wider fields of view, larger dynamic ranges, and better performance in low light, under motion shake, and in many other conditions that have always degraded image quality and limited photographic opportunities. Digital cameras can capture parts of life (motion, darkness, high contrast, etc.) that were impossible to capture before. Remember sepia toning, or HDR, or creating panoramas? From a darkroom to Photoshop to an option on an app on a phone.
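To make one of those examples concrete, here is a minimal sketch of exposure fusion, the idea behind many “HDR” modes: take several frames at different exposures and blend them with per-pixel weights that favor well-exposed pixels. The weighting function and the single-scale blend are simplifying assumptions for illustration, not any particular camera’s pipeline.

```python
import numpy as np

def exposure_fusion(frames):
    """Blend differently exposed frames (same shape, values in [0, 1])
    with per-pixel weights that favor well-exposed, mid-tone pixels."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    # Gaussian-style weight peaking at 0.5: very dark and blown-out
    # pixels contribute little to the fused result.
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-8  # guard against division by zero
    fused = np.sum([w * f for w, f in zip(weights, frames)], axis=0) / total
    return np.clip(fused, 0.0, 1.0)
```

Production pipelines do this blending across multiple scales and align the frames first, but the core move is the same: the “photo” is computed from many exposures, not captured in one.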
The next step of computational photography will move us from taking better pictures to making different pictures. Visually, we’re beginning to see this with the Lytro and light-field photography, where the ability to capture and render multiple depths of field in a single photographic artifact is changing the meaning, no, the opportunity of a photograph. Once photos are digital bits, the technical options for photos expand to whatever we can do with bits (store, share, combine them, etc.), and farther down the line are the visual and artistic interpretations that the artists of tomorrow will create.
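The light-field trick can be sketched in a few lines. Given a grid of sub-aperture views, refocusing after the fact is roughly “shift and add”: shift each view in proportion to its offset from the central view and a chosen focal parameter, then average. This toy version assumes a 4D array of views and rounds shifts to whole pixels; real light-field rendering interpolates sub-pixel shifts.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing over a (U, V, H, W) stack of sub-aperture views.

    alpha selects the synthetic focal plane: each view is shifted by alpha
    times its (u, v) offset from the central view, then all views are averaged.
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))  # whole-pixel shift for brevity
            dx = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha re-renders the same capture at different focal planes, which is exactly what makes the focus decision something you make after the shot instead of before it.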
But returning to the thought: what happens when the camera becomes an app? When you reduce the camera to one app of many and one sensor of many, connected to all those other apps and sensors, you start creating really interesting ways to change the substance of its images. For example, today’s iPhone has sensors to detect moisture, ambient light, proximity, motion (the accelerometer), and orientation (the gyroscope), and maybe soon, atmospheric conditions. Paired with connectivity technology (cellular, WiFi, Bluetooth, iBeacon, NFC, etc.) and access to a network of information, the “camera” of today isn’t just an image sensor and a lens, but the combination of all these sensors and apps connected through constantly evolving operating systems. We’ve started to use these technologies to add contextual and structured data to photos, at the time of capture or after: locations, faces, scenes, for example. But what happens when we use ambient information and other apps as inputs to the photographic process? The image sensor isn’t the only sensor that the camera of tomorrow will use. [1]
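As a thought experiment, a camera app that treats the image sensor as one input among many might bundle a capture like this. Every name below (the sensor objects, their read() method, the field names) is hypothetical, a sketch of the idea rather than any real device API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Capture:
    """One photograph plus the ambient context recorded alongside it."""
    pixels: bytes                     # raw output of the image sensor
    taken_at: datetime
    context: dict = field(default_factory=dict)

def take_photo(image_sensor, other_sensors):
    """Read the image sensor, then attach a reading from every other
    available sensor (GPS, gyroscope, ambient light, ...) as context.

    image_sensor and other_sensors are hypothetical stand-ins for
    whatever the operating system actually exposes."""
    return Capture(
        pixels=image_sensor.read(),
        taken_at=datetime.now(timezone.utc),
        context={name: s.read() for name, s in other_sensors.items()},
    )
```

The point of the sketch is the shape of the record: once every capture carries its ambient context, that context can become an input to how the image itself is processed, not just a caption attached afterward.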
In 2011, Marc Andreessen wrote that “software is eating the world”, noting and prophesying that software was disrupting and transforming industries. Software has fundamentally altered our lives, and perhaps nowhere is this more obvious than in photography, where we see and experience the impact of imaging software innovations every time we take, see, and share a photograph. In 1999, people around the world took about 80 billion photos; in 2014, people will share over 1 trillion photos (and the number of photos we take will be even higher).
That behavior shift has mirrored massive economic and financial shifts in the industry, and that’s the point of Andreessen’s observation and his investing activity. As software disrupts and transforms value chains, significant economic gains accrue to the disruptors and transformers. That transition is occurring in the photography industry as well.
But the impact isn’t just financial: it’s cultural, artistic, and personal. The meaning of a photograph has changed alongside the behavior change: photos have become a form of communication, and the unending flow of imagery has changed everything about how we interpret and value photographs. Photos don’t have to be “art” to be good, but they do have to be relevant to our lives and speak to the opportunities of the moment. And powered by innovations in imaging software and “camera” hardware, the opportunity for photography has never been greater.
[1] Yes, digital cameras use more sensors than just image sensors, to enable image stabilization and more. But the sensors and connectivity of traditional cameras still pale in comparison to smartphones.