The Future of Mobile Photography Apps

iOS 8, Android L, and the implications for mobile photography
by Taylor Davidson · 30 Jul 2014

About a year ago I wrote about the future of filters, focusing on what’s next in consumer photography products:

… I think we’ll soon see apps that productize the photographic eye and standardize composition.

Meaning, there are abundant options for people to edit, share, and view photos, but few options to help people take better pictures. Smartphone photo apps and cameras have largely neglected the “capture experience” in favor of the editing and sharing experiences [1]. Live view, capture grids, and exposure meters help us understand our compositions, but they still take a lot of work and understanding to use well. And that makes sense: the photographer’s eye, the ability to process a scene and see and compose a quality, interesting picture, is a unique, artistic skill that can be difficult to teach and takes a lot of experience to master.

Of course, what’s hard is also the opportunity.

* * *

Editing (in the hobbyist sense of the word) is a hard skill to learn, but photo app developers have tackled this issue by creating preset filters and post-processing tools to help people process and improve their images. Instagram’s success stems from its ability to help people make better pictures: quality filters that change tones, colors, and moods, a standardized square crop that simplifies composition, and a limited range of editing options combine to make it easy for people to make and share better pictures.

Competing app developers caught on, and people now have a wide range of editing and sharing apps at their disposal. Even Instagram has added features and options to keep up with competitors (and to aid in its monetization plans). Images are everywhere.

In that context, it’s not surprising that Apple’s announcement of manual camera controls in iOS 8 was greeted with a great deal of excitement by developers and users. The current full-auto iPhone camera app, with a limited set of options around HDR, video, panoramas, AE/AF lock, and nine preset filters, will soon offer near-full manual control over the camera. Photographers will be able to set ISO, shutter speed, focus, white balance, and exposure bias, and use EV and shutter speed/ISO bracketing, all at the time of capture.
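To make that concrete, here is a minimal sketch of what those controls look like through AVFoundation’s AVCaptureDevice on iOS 8. The helper name and the specific values are illustrative, not taken from any shipping app:

```swift
import AVFoundation

// Sketch: lock the camera to a custom ISO and shutter speed on iOS 8.
// The function name and values are illustrative only.
func lockExposure(on device: AVCaptureDevice, iso: Float, shutterSeconds: Double) {
    guard device.isExposureModeSupported(.custom) else { return }
    do {
        try device.lockForConfiguration()
        // Clamp the requested ISO to what the active format supports
        // (the duration should be clamped to min/maxExposureDuration the same way).
        let clampedISO = min(max(iso, device.activeFormat.minISO), device.activeFormat.maxISO)
        let duration = CMTime(seconds: shutterSeconds, preferredTimescale: 1_000_000)
        device.setExposureModeCustom(duration: duration, iso: clampedISO) { _ in
            // From here on, frames are captured with the custom exposure settings.
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the camera for configuration: \(error)")
    }
}
```

Focus (via setFocusModeLocked), white balance gains, and exposure bias follow the same pattern: lock the device, set the value, unlock.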

But since the iPhone is already the world’s most popular camera, this raises the question: is this what people need? The iPhone’s success as a camera stems from many factors, many of them about it being an Internet-connected, geo-tagged, community-connected camera, but the ease and simplicity of an all-auto, easy-to-use, touch-screen experience has definitely been a big part of its success. Do people truly want more settings to learn and more ways to fine-tune the image as it’s captured?

* * *

In a vacuum, yes, but few people live in a vacuum.

Professional photographers will rejoice at the granular manual controls and creative power in iOS 8. But the amateur / hobbyist photographer drives mobile photography today, and I’ll bet few people will ever consciously use the manual controls.

That doesn’t mean that iOS 8’s innovations are lost: quite the opposite. The Camera API, PhotoKit, and app extensions will unlock a wide range of capabilities for photo app developers, which in turn could create new, better, and easier ways for people to take photos.

The Camera API, or AVCaptureDevice API, allows developers to tap into the same manual control options being built into the stock iOS Camera app. PhotoKit allows developers to tap into the same functionality as the stock Photos app, including iCloud Photos. And the more general introduction of app extensions, some of which are focused on photo editing, will allow users to access editing features and filters directly from a wider range of applications without having to succumb to the save-and-reopen workflow. Here’s an example: if users have the VSCO app installed, they would be able to edit pictures using VSCO filters and tools directly in the camera app, without having to open the image in VSCO itself. App developers still have to choose to create extensions for their core apps, but the ability to integrate their features into a broader range of user interaction points will be an attractive option to many.
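For developers, that scenario maps to iOS 8’s photo editing extension point. The skeleton below is a hedged sketch of the PHContentEditingController protocol such an extension adopts; the class name and the filtering logic it would host are placeholders, not any particular app’s implementation:

```swift
import Photos
import PhotosUI
import UIKit

// Sketch of an iOS 8 photo editing extension's view controller.
class FilterEditingViewController: UIViewController, PHContentEditingController {
    var input: PHContentEditingInput?

    // Return true only for adjustment data this extension knows how to resume editing.
    func canHandle(_ adjustmentData: PHAdjustmentData) -> Bool {
        return false
    }

    // Called by the host app (e.g. the stock Photos app) when editing begins.
    func startContentEditing(with contentEditingInput: PHContentEditingInput,
                             placeholderImage: UIImage) {
        input = contentEditingInput
        // Show placeholderImage immediately, then load the full-size image
        // from contentEditingInput.fullSizeImageURL and present the filter UI.
    }

    // Called when the user taps Done: render the edit and hand it back to the host.
    func finishContentEditing(completionHandler: @escaping (PHContentEditingOutput?) -> Void) {
        guard let input = input else { completionHandler(nil); return }
        let output = PHContentEditingOutput(contentEditingInput: input)
        // Write the rendered JPEG to output.renderedContentURL and attach
        // output.adjustmentData so the edit can be reopened later.
        completionHandler(output)
    }

    var shouldShowCancelConfirmation: Bool { return false }
    func cancelContentEditing() {}
}
```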

* * *

Apple isn’t the only one innovating here, of course. Google recently announced Android L, which includes a wide range of new functionality and APIs for mobile photography developers. Previously, Google’s stock Android camera app had a range of manual control options, but they weren’t exposed to third-party developers, so OEMs had to create their own camera apps and APIs to offer to third-party developers. Android L changes that, and goes a step further by building new features and options for third-party developers. In addition to the usual manual controls, it will be possible to do:

… burst shots, full resolution photos while capturing lower resolution video, and HDR video. In addition, because the pipeline gives all of the information on the camera state for each image, Lytro-style image refocusing is doable, as are depth maps for post-processing effects.

Impressive and exciting, certainly. Android L takes a completely redesigned approach to imaging, and it will provide far more functionality and control for third-party developers to access and leverage.

But to compete and differentiate, app developers will have to do more than simply expose the new manual options. A developer will have to use the manual controls to make the capture experience automatically better, easier, and faster for the user, in a way that sets their app apart from others on the market.

How could this play out for the user? We aren’t going to choose ISOs and aperture settings; we are going to tell an app what we’re taking a picture of or what we want to accomplish, and the app is going to figure it out automatically. We aren’t always going to think about new, creative ways to expose a scene: the app is going to figure it out and suggest it to us automatically. An app will read the subject of our composition and automatically suggest crop options that fit the rule of thirds or other composition guidelines. An app will tell us to crouch down, or move closer, or move 10 feet to the right to get a better picture. An app will read the subject and create motion blur, bokeh, double exposures, diptychs, and other creative interpretations of our image. Apps will work to replace human instruction, education, and experience by embedding the decision-making and interpretation into the mobile capture experience. [2]
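None of that composition logic ships in iOS 8 or Android L today, but the geometric part is straightforward once a subject has been detected. The sketch below assumes a hypothetical detector has already returned the subject’s center point, and simply computes a crop that places it on the nearest rule-of-thirds intersection:

```swift
import CoreGraphics

// Given the full image size, a detected subject point, and a target aspect ratio,
// return a crop rect that puts the subject on the nearest rule-of-thirds intersection.
// The subject point would come from a (hypothetical) face or saliency detector.
func ruleOfThirdsCrop(imageSize: CGSize, subject: CGPoint, aspectRatio: CGFloat) -> CGRect {
    // Size the crop as large as possible at the requested aspect ratio.
    var cropWidth = imageSize.width
    var cropHeight = cropWidth / aspectRatio
    if cropHeight > imageSize.height {
        cropHeight = imageSize.height
        cropWidth = cropHeight * aspectRatio
    }

    // The four rule-of-thirds intersections, as fractions of the crop's size.
    let thirds: [CGPoint] = [
        CGPoint(x: 1.0 / 3.0, y: 1.0 / 3.0), CGPoint(x: 2.0 / 3.0, y: 1.0 / 3.0),
        CGPoint(x: 1.0 / 3.0, y: 2.0 / 3.0), CGPoint(x: 2.0 / 3.0, y: 2.0 / 3.0),
    ]

    // Try placing the subject at each intersection; keep the placement that needs
    // the least clamping to stay inside the image bounds.
    var best = CGRect.zero
    var bestShift = CGFloat.greatestFiniteMagnitude
    for t in thirds {
        let originX = subject.x - t.x * cropWidth
        let originY = subject.y - t.y * cropHeight
        let clampedX = min(max(originX, 0), imageSize.width - cropWidth)
        let clampedY = min(max(originY, 0), imageSize.height - cropHeight)
        let shift = abs(originX - clampedX) + abs(originY - clampedY)
        if shift < bestShift {
            bestShift = shift
            best = CGRect(x: clampedX, y: clampedY, width: cropWidth, height: cropHeight)
        }
    }
    return best
}
```

A capture app could run something like this on every preview frame and overlay the suggested crop before the shutter is ever pressed.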

* * *

Let’s think about the innovations more broadly and consider the implications of what Apple and Google are doing in mobile photography:

  • For an independent app developer, the time is now to innovate on the capture experience. Editing and filters are commodities. The smartphone OSs and OEMs (Apple, Google, Samsung, Nokia, and now Amazon) are building more functionality and features into the stock camera and photo apps. Focus on photography, not photos. Work with photographers to understand how images can be improved, understand the creativity and thought process behind a good eye, and create something that magically takes better pictures for everyone.
  • For an existing photo app developer, the time is now to build or buy your way down the stack, closer to the capture experience. The capture, editing, and sharing experiences are going to become more tightly integrated in the future; the opportunity is now.
  • For a camera manufacturer, beware. Smartphones have stolen share, and software is quickly replicating the advantages of camera and lens hardware (at least to the 80-90% level). In the end, I’m betting on software for the masses, but hardware will still have a role for the last 10-20% of users and use cases. Plan accordingly.
  • For the professional photographer, automatic technology will make it harder to compete on a single, stand-alone image basis, because if 1.5 billion images are shared every day, it’s inevitable that there will be some accidental stunners in there. But technology won’t enable a crowd of hobbyists to replace professionals, as there will always be a role for professionals to create quality images in important, time-sensitive, access-restricted, creatively-determined, and one-time-only type situations. As always, know what you shoot and what makes you different. [3]
  • For enterprise image users or enterprise imaging developers, note that the innovations in smartphone software aren’t just about consumer photography. Introducing better imaging software into smart devices opens up new opportunities for enterprise use cases. Scanning, 3D imaging, satellite imaging, aerial imaging, security imaging, and more are also impacted by combining better imaging software with inexpensive, networked hardware.
  • For messaging apps that leverage photos as communication, I’m curious to see how you’ll use the new imaging possibilities for communication use cases.
  • For a user, welcome to the future.

Many of these implications will take a lot of time to filter down into products, business models, and widespread adoption. And many of the creative applications of the greater imaging controls of iOS and Android will be hard to implement and accomplish. But like most platform innovations, what’s really exciting will be the applications of the technology that we haven’t thought of yet. It’s still early days in mobile photography.


  1. Thank you to Zack, who gets credit for the phrase “capture experience” and starting the discussion that led to this post. ↩︎

  2. Obviously, image recognition and automatic cropping and other functionalities aren’t available yet. But Google’s implementation of Lens Blur in the Google camera app and use of computer vision technologies and depth maps to determine foreground and background show it’s definitely possible. ↩︎

  3. Richard Kelly, Paul Melcher, and I talked about the opportunities for professional photographers during an ASMP webinar in June; inspiration and credit for this point go to Richard and Paul. ↩︎