I recently came across this interesting video on a quasi-scandal involving Samsung smartphone cameras taking better pictures of the moon than the physical camera elements actually allow:
The video raises an interesting question about what the pictures that smartphones take actually are. In the video, Marques proposes that the image a smartphone generates is something like what the smartphone “thinks” you want the slice of reality you were trying to capture to look like.
It’s no secret that smartphones these days do massive amounts of processing on the photos they take, and that this goes way beyond removing noise and compensating for camera shake; for years now they’ve been actively recognizing the subject in front of them and adjusting focus, faking bokeh (the way subjects behind the focal plane are blurred), punching up colors, adjusting contrast in only some parts of the picture, and so on.
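To make “processing” a little more concrete, here’s a toy sketch in Python with Pillow of the kind of global tweaks involved. This is not what any phone actually runs (real pipelines are subject-aware and work on specific regions), and the file names are just placeholders:

```python
from PIL import Image, ImageEnhance, ImageFilter

# Toy sketch of global post-processing, not any phone's real pipeline.
img = Image.open("photo.jpg")  # hypothetical input file

# Crude noise reduction, then "punch up" the colors and boost contrast a bit.
denoised = img.filter(ImageFilter.MedianFilter(size=3))
punched = ImageEnhance.Color(denoised).enhance(1.3)
final = ImageEnhance.Contrast(punched).enhance(1.1)

final.save("processed.jpg")
```

The point is just that these are adjustments applied to the pixels you captured; what the video is about goes a step further than that.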
There is a problem with this when it comes to taking pictures of the moon, though: there is only one moon, we only ever see the same face of it because it’s tidally locked to the Earth, and we’re so far away from it that there is effectively only one angle to take pictures from. In short, except for haze in the atmosphere or objects in front, there’s only one picture you can take of the moon.
Using AI to improve pictures of the moon is thus not easily distinguished from just replacing your picture with a better picture of the moon. It is different; the approach Samsung uses preserves whatever color you see in the moon due to haze in the atmosphere (a honey moon, a red moon, etc.) and won’t override a cloud or bird in front of the moon when you take the picture. But if you’re not capturing weird lighting or something in front of the moon, a cleared-up version of your picture of the moon isn’t really different from just using a better picture instead.
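A toy sketch makes the “enhancement vs. replacement” blur obvious. This is not Samsung’s method, just a hypothetical illustration of the underlying point: if enhancement pulls every capture toward the same reference image, then at full strength it’s simply substitution.

```python
import numpy as np

# Hypothetical illustration, not Samsung's actual method: "enhancing" a
# capture by blending it toward a single shared reference image.
def enhance_toward_reference(capture: np.ndarray,
                             reference: np.ndarray,
                             strength: float) -> np.ndarray:
    """Blend a noisy capture toward a sharp reference (both HxWx3 floats in [0, 1])."""
    return (1.0 - strength) * capture + strength * reference

# strength = 0.0 -> your photo, untouched
# strength = 1.0 -> literally the reference photo, regardless of what you shot
```

Anything that only ever has one true target sits somewhere on that slider, and the closer it gets to the right end, the less your original photo matters.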
Smartphones have been clearing up the pictures they take for a long time now, and for the most part people don’t really object. (Every now and then, when posting pictures of my superdwarf reticulated python to Instagram, I have to note that the camera punched up the color, though it’s not a big deal because that’s what he looks like outdoors in sunlight, so it’s only a slight inaccuracy.)
It’s just weird that there happens to be a subject where you can only take one picture, so the AI image enhancement doesn’t really need your original photo in order to present a clearer version of the photo you took. From what we can tell it does use your photo, and it doesn’t improve every photo of the moon to a pixel-perfect one, but in some sense those are just an implementation detail and imperfect enhancement, respectively.
Of course, the same thing that makes this a problem makes it purely academic; there’s no important reason to take photos of the moon, because at best they look exactly like photos you can easily look up. And if you’re doing it for fun, you’re going to use a real camera, not a smartphone camera.
It is an interesting academic problem, though.