Taylor Davidson · Beyond Better Images
> The next step of computational photography will move us from taking better pictures to making different pictures. Visually, we’re beginning to see this with the Lytro and light-field photography, where the ability to capture and render multiple depths-of-field in a single photographic artifact is changing the meaning, no, the opportunity of a photograph. Once photos are digital bits, the technical options for photos expand to whatever we can do with bits (store, share, combine them, etc.), and farther down the line are visual and artistic interpretations that the artists of tomorrow will create.
I wrote that last year in a post called Software is Eating the Camera, and since then, I’ve continued to be amazed by how software is changing imagery, from understanding the content and context of images to making better images with simpler optics.
In the same vein of innovation, there has been a recent spate of news about startups combining optics with software to change how photos are made. Multi-lens arrays use multiple lenses and image sensors organized in an array to simultaneously capture multiple pictures, which software then combines into a single image; this mixture of hardware and software emulates the qualities of a much larger single-lens system (a sketch of the core idea follows the list below). Multiple companies are working in this area:
- Algolux raised $2.6 million in venture funding last fall, and recently talked about what they are working on
- Apple purchased LinX for $20 million, sparking hopes for improved image technologies in the next iPhone
- Light raised $9.7 million last summer and recently talked about their multi-lens approach (and announced a license deal with Foxconn)
- Others like Corephotonics and Pelican are working to commercialize similar technologies [1]
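To make the core idea concrete, here is a minimal sketch in Python with NumPy. It assumes the array’s frames are already aligned (real pipelines do sub-pixel registration, which I’m glossing over) and shows the simplest benefit of combining simultaneous captures: averaging N frames cuts sensor noise by roughly √N, one of the qualities a larger single-lens system buys you.

```python
import numpy as np

def merge_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Merge simultaneous captures from a lens array into one image.

    Averaging N aligned frames reduces sensor noise by ~sqrt(N),
    emulating the signal-to-noise ratio of a larger lens and sensor.
    (Hypothetical sketch: real multi-lens pipelines also register
    frames sub-pixel and weight them for super-resolution or depth.)
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate four small sensors seeing the same scene with independent noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(480, 640))                  # "true" scene
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(4)]

merged = merge_frames(frames)
print("single-frame noise:", np.std(frames[0] - scene))       # ~20
print("merged noise:      ", np.std(merged - scene))          # ~10, i.e. 20/sqrt(4)
```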
The headline focus of these technologies is improving the imaging capabilities of small devices, and it makes sense: any device manufacturer that uses imaging optics as a core part of its product is a prime candidate to integrate multi-lens systems to help it compete. Who doesn’t want SLR-quality photos from a tiny cellphone? [2]
But my interest in the technologies is different: how will they enable us to make better photos? To emphasize: not better images, but better photos.
Breakthroughs in photography come from photo quality, not image quality. I’ll define “photo quality” as all the aspects, emotions, contexts, and impacts captured in a photo, separating it from the technical qualities of the image itself (resolution, etc.).
The impact of digital imaging and networked imagery (i.e. devices connected to the network) on photography is apparent to all of us today, but these are merely the latest in a long line of imaging technologies that changed how we can use cameras to create photos:
- Film (i.e. flexible photographic roll film) allowed photographers to create photos with far more sensitivity to the full spectral range of colors and to take photos much faster than earlier photographic mediums like daguerreotypes (silver-plated copper sheets), wet-plate and dry-plate technologies (tintypes), and early glass plate-based film.
- 35mm format film (i.e. 135 film) changed photography by allowing photographers to carry smaller, lighter cameras into the field without sacrificing image quality. Leica was the first to popularly commercialize (though not the first to produce) 135 format cameras, relying on high-quality lenses to create sharp negatives that could be enlarged to produce larger prints.
- Early color film, even though it was more expensive and difficult to use in indoor lighting, gained in popularity for many reasons unrelated to image quality. Color photography gave photographers more notes to use in a photo, a change that many black-and-white photographers resisted or didn’t truly respect for many years, but it permanently changed how we interpret photos.
- Polaroid changed photography with instant photography, but not because the images were better; it was because the photos were better. Polaroid offered inexpensive access to the tools to turn moments into photos, reducing the time from experience to photograph to the network. [3]
Ubiquity (the “infinite lens”) [4] and connectivity have made photography into a language used by the masses. This sea change didn’t happen because the images were better, and perhaps not even because the photos were better, but because the contexts of the photos were better: more personal, more meaningful, more relevant, more immediate, more available, more connected. Photos as messaging, not as art. It’s the contexts that matter, not the images themselves:
> When you reduce the camera to one app of many and one sensor of many, connected to all those other apps and sensors, you start creating really interesting ways to change the substance of its images. For example, today’s iPhone has sensors to detect moisture, ambient light, proximity, motion (the accelerometer), and orientation (the gyroscope), and maybe soon, atmosphere sensors. Paired with connectivity technology (cellular, WiFi, Bluetooth, iBeacon, NFC, etc.) and access to a network of information, the “camera” of today isn’t just an image sensor and a lens, but the combination of all these sensors and apps connected through constantly evolving operating systems. We’ve started to use these technologies to add contextual and structured data to photos, at time of capture or after: locations, faces, scenes, for example. But what happens when we use ambient information and other apps as inputs to the photographic process? The image sensor isn’t the only sensor that the camera of tomorrow will use. (link)
And the photo of tomorrow won’t just consist of the image, but everything around the image.
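As a toy illustration of that idea (the `ContextualPhoto` type and its fields are hypothetical, not any real camera API), a “photo” in such a system might be a record that bundles the image with whatever the device’s other sensors report at capture time:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SensorContext:
    """Ambient readings sampled at the moment of capture (hypothetical fields)."""
    latitude: float | None = None                               # GPS
    longitude: float | None = None
    ambient_lux: float | None = None                            # ambient light sensor
    orientation_deg: tuple[float, float, float] | None = None   # gyroscope
    acceleration_g: tuple[float, float, float] | None = None    # accelerometer

@dataclass
class ContextualPhoto:
    """A photo as image bytes plus everything around the image."""
    image: bytes
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    context: SensorContext = field(default_factory=SensorContext)
    tags: list[str] = field(default_factory=list)   # faces, scenes, etc., added later

# A capture pipeline would populate the context from the device's sensor APIs:
photo = ContextualPhoto(
    image=b"...jpeg bytes...",
    context=SensorContext(latitude=40.7128, longitude=-74.0060, ambient_lux=320.0),
    tags=["outdoors", "skyline"],
)
print(photo.captured_at, photo.context.ambient_lux)
```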
When I look at the innovations in optical hardware, yes, I get interested, because who doesn’t want better images? It’s an immediate, visceral, quasi-geeky response from anyone interested in photography. “More megapixels? Yes, please,” is what we used to say. But absolute image quality is not what’s most interesting in photography today; what’s interesting is everything else about photos and how we use them. Not view them, but use them.
To the extent that multi-lens array technologies enable us to make higher-quality images, I’m all for it as a consumer, although it’s probably not a life-changing or art-changing event. But if multi-lens arrays and other combinations of optics and software go beyond making better images and change how we make photos - how we can make a photograph, with what device, in what contexts (low light, harsh light), at what speeds, with what confidence in our abilities, with what contextual understanding - then that’s what excites me about imaging technology. [5]
Interested in imagery and technology? Check out the LDV Vision Summit on May 19-20 in NYC
You can follow me on Instagram and EyeEm.
[1] While Lytro’s technology is closely related, they are commercializing a different technology - multiple optics with a single sensor, instead of multiple optics with multiple sensors - with a more singular value proposition (depth mapping), rather than the multi-lens systems’ broader focus on image quality, depth mapping, and low-light performance. ↩︎
[2] A separate question, which I don’t want to talk about right now, is whether these technologies make sense as companies. ↩︎
[3] Yes, I’m using “network” to mean physically sharing photos. ↩︎
[4] Everything Craig Mod writes about photography is beautiful and thought-provoking. Sigh. ↩︎
[5] And to be fair, multi-lens array technologies could do more than just make higher-quality images. Better low-light sensitivity enables us to take photos in more contexts. Depth mapping could add depth to photography and create a new medium for expression. And there is a wide range of commercial applications in enterprise imagery contexts that could have a bigger impact than consumer applications. ↩︎