There’s an interesting story going around right now about the Galaxy S23 and the moon. A person on Reddit deliberately blurred a photo of the moon, displayed it on a screen, and then photographed that blurry image with a Galaxy S23, which used its highly trained AI to turn it into a clear, beautiful picture of the moon. The resulting shot wasn’t so much the moon the photographer saw as an AI-generated picture of what the S23’s computer brain expected the moon to look like in that particular photo.
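If you want a feel for why that experiment is so damning, here’s a minimal Python sketch of how you might recreate the test image. The Pillow library, the file names, the downscale size, and the blur radius are all my assumptions, stand-ins for whatever the original poster actually used; the point is only that the blurred image provably contains no fine detail for the camera to recover.

```python
# Sketch of the Reddit test image, under assumed parameters.
# Requires Pillow (pip install Pillow); "moon.jpg" is any sharp
# photo of the moon. Size and blur radius are illustrative guesses.
from PIL import Image, ImageFilter

original = Image.open("moon.jpg")

# Throw away real detail: shrink the image hard, then blur it.
small = original.resize((170, 170), Image.LANCZOS)
blurred = small.filter(ImageFilter.GaussianBlur(radius=4))

# Blow it back up so it fills a monitor; no genuine detail survives.
test_image = blurred.resize(original.size, Image.LANCZOS)
test_image.save("blurry_moon.png")

# Display blurry_moon.png full-screen and photograph it with the S23:
# any crater detail in the phone's output had to come from the AI,
# not from the scene in front of the lens.
```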
I don’t really know how to feel about that. If I took a picture of my wife, would I want the picture of the woman I love as seen through my lens in that moment, or the idealized version of her the AI generates on the phone? That’s kind of a loaded question because, with all of the computational photography going on in every smartphone (iPhone included), you never really see exactly what the lens saw anymore. To me, the tipping point is when the image the lens actually captured no longer matters. It appears the S23 reaches that point when you shoot the moon.