Watching this week’s Apple event gave me a sense of déjà vu. With every new feature the iPhone maker announced, I felt like shouting something along the lines of “The Simpsons already did it!” It felt as if everything Apple was doing was a riff on something another company had tried and tested before. Sure, Apple might be taking what others did and (possibly) making it better. But the company is also letting others take risks and innovate in its place, particularly when it comes to photography — an area where it used to shine.
Take, for example, the new “Deep Fusion” computational photography feature that Phil Schiller described as “way cool.” It’s an image processing system that taps the A13 Bionic’s neural engine and uses machine learning. According to Apple, this system will “do pixel-by-pixel processing of photos, optimizing for texture, details and noise in every part of the photo.” Deep Fusion won’t arrive until later this fall, so we don’t know yet how effective it will be. Apple did show sample shots from its Night mode tool, which promises to improve low-light photography, and those results looked impressive.
That latter feature is the most obvious example of Apple’s attempts to outdo its competitors. If you recall, Google’s Night Sight launched last November and made it possible to take relatively clear photos in near-total darkness. And Google wasn’t even the first to try this; it was just the most effective. Huawei, LG and Samsung have all offered their own takes on the feature in previous flagship phones, with varying degrees of success. Apple’s Night mode promises to do pretty much the same, though how well it works remains to be seen.