According to the classical definition, a photograph is an image formed by light falling on a light-sensitive surface, usually photographic film or an electronic image sensor. Light particles (called photons) provide the image of reality.
Of course, it has always been the case that you could have a better or worse camera, which determined how finely the image was resolved. The lens also played a major role, deciding, for example, how sharply a subject could be rendered.
To be honest, there has never really been such a thing as a faithful reflection of reality. Through the angle of view and the composition alone, the photographer has always decided what to show and what to leave out. And because an image is two-dimensional while reality has three spatial dimensions, every photo has always been only an approximation.
Quietly, almost stealthily, this has changed over the last decade, and for the last three years or so it has been starkly different. That's because most photos these days are taken with smartphones. A smartphone has a ridiculously small lens, downright shameful compared to what you get on a professional camera. And the sensor is just as tiny.
Many smartphones have an image sensor size of only 0.315 inches. In the premium segment there are 1-inch sensors. But the inch used here is not the common inch measurement: it's a legacy designation inherited from old video camera tubes. A so-called 1-inch sensor on a phone measures only 13.2 x 8.8 millimeters. Strange, isn't it?
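A quick back-of-the-envelope calculation shows how far that marketing "inch" is from a real inch (the full-frame comparison is my own addition):

```python
import math

# The "1-inch type" sensor mentioned above: 13.2 mm x 8.8 mm.
width_mm, height_mm = 13.2, 8.8
diagonal_mm = math.hypot(width_mm, height_mm)     # ~15.9 mm

# Compare with a real inch and with a full-frame sensor (36 mm x 24 mm).
real_inch_mm = 25.4
full_frame_diag = math.hypot(36.0, 24.0)          # ~43.3 mm

print(f"'1-inch' sensor diagonal:  {diagonal_mm:.1f} mm")
print(f"an actual inch:            {real_inch_mm} mm")
print(f"crop factor vs full frame: {full_frame_diag / diagonal_mm:.2f}")
```

So the "1-inch" sensor's diagonal is barely 16 millimeters, well short of an actual inch.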
On a phone, several frames are captured even before the shutter button is pressed, which helps avoid camera shake. These frames are shot at different exposures and merged, so different areas of the image can be exposed differently.
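To make that concrete, here is a minimal, NumPy-only sketch of the idea behind merging bracketed frames. It is a toy version of Mertens-style exposure fusion, not what any particular phone actually runs; the function name and the mid-gray weighting are my own assumptions:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Merge bracketed frames by favoring well-exposed pixels.

    frames: list of float arrays in [0, 1], all shaped (H, W, 3).
    A pixel's weight is highest when its brightness is near mid-gray,
    a crude stand-in for the 'well-exposedness' term in exposure fusion.
    """
    stack = np.stack(frames)                      # (N, H, W, 3)
    luma = stack.mean(axis=-1, keepdims=True)     # rough brightness per pixel
    weights = np.exp(-((luma - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-8         # normalize across frames
    return (weights * stack).sum(axis=0)          # weighted blend

# Three hypothetical bracketed frames: underexposed, normal, overexposed.
rng = np.random.default_rng(0)
scene = rng.random((4, 6, 3))
frames = [np.clip(scene * ev, 0, 1) for ev in (0.5, 1.0, 2.0)]
merged = fuse_exposures(frames)
print(merged.shape)  # (4, 6, 3)
```

Each output pixel ends up drawn mostly from whichever frame exposed it best, which is why merged phone photos can hold both shadows and highlights at once.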
Faces are what matter most to smartphone cameras, and therefore to today's photography in general. Phones detect faces damn well. And then they expose those faces evenly. Phones from Asian manufacturers tend to expose faces even brighter than American ones, because a light complexion is more popular there. Take a smartphone photo of a person of color today: the phone knows how to expose them correctly! Only ten years ago, you could rest assured that faces of color were reliably underexposed.
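As an illustration of face-priority exposure, here is a rough sketch using OpenCV's stock Haar-cascade face detector. The target brightness and the simple global gain are assumptions on my part; real phone pipelines are far more sophisticated and adjust locally:

```python
import cv2
import numpy as np

# Hypothetical face-priority exposure: meter on the detected face region
# instead of the whole frame, then scale the image toward a target brightness.
TARGET_FACE_BRIGHTNESS = 0.55   # assumed target, in [0, 1]

def face_priority_exposure(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) == 0:
        region = gray                      # fall back to full-frame metering
    else:
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        region = gray[y:y + h, x:x + w]

    measured = region.mean() / 255.0
    gain = TARGET_FACE_BRIGHTNESS / max(measured, 1e-3)
    return np.clip(img_bgr.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```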

This processing is not only about faces. It also applies to complex lighting, e.g. shooting into the sun, or to night scenes, where smartphones have become hugely capable. Put simply: if a sky is detected, it gets more saturation and less exposure so it won't blow out.
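Here is a deliberately crude sketch of that sky treatment: a heuristic colour-based mask instead of the learned segmentation a real phone would use, with the darkening and saturation factors chosen arbitrarily:

```python
import cv2
import numpy as np

# Toy sky-aware adjustment: build a rough sky mask (bright, blue-ish pixels),
# then darken and saturate only that region.
def protect_sky(img_bgr, exposure_drop=0.8, saturation_boost=1.3):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)

    # "Sky" = blue-ish hue and fairly bright. OpenCV hue runs 0..179.
    sky = ((h > 90) & (h < 130) & (v > 140)).astype(np.float32)
    sky = cv2.GaussianBlur(sky, (31, 31), 0)   # feather the mask edges

    v = v * (1 - sky) + v * exposure_drop * sky                    # less exposure
    s = s * (1 - sky) + np.clip(s * saturation_boost, 0, 255) * sky  # more saturation
    out = cv2.merge([h, np.clip(s, 0, 255), np.clip(v, 0, 255)])
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_HSV2BGR)
```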
Today, a photo is created less by photons and more by processing. A photo is what the software calculates as an optimal image.
On modern phone cameras you can't switch off this HDR processing; sometimes you can only tone it down. But it's becoming more and more the norm, and cell phones are producing photos that sometimes look awfully artificial, but often look so perfect that it's creepy.
If you want more control over the developing, i.e. the processing of the photo, you can shoot in RAW. Starting with the iPhone 12 Pro, you can select Apple's ProRAW format for this. Alternatively, there are camera apps like Lightroom Mobile (for iPhone and Android) that can also shoot DNG RAW.
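Developing such a DNG on the computer yourself makes the difference from the phone's own rendering obvious. A minimal sketch using the rawpy and imageio Python packages (the file names are just placeholders):

```python
import rawpy
import imageio

# "Develop" a DNG yourself instead of trusting the phone's pipeline.
with rawpy.imread("IMG_0001.DNG") as raw:
    rgb = raw.postprocess(
        use_camera_wb=True,      # keep the white balance the camera recorded
        no_auto_bright=True,     # skip automatic brightening
        output_bps=16,           # 16-bit output leaves room for editing
    )

imageio.imwrite("IMG_0001_developed.tiff", rgb)
```

The result usually looks flatter and duller than the phone's JPEG, which is exactly the point: all the "spectacular" is added in processing.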
All of what I'm describing doesn't just apply to smartphone photography. Strictly speaking, every digital camera also processes the image using an algorithm when a JPG is created. Sometimes I'm downright disappointed when I load my RAW files onto the computer and they look less spectacular than what the camera showed me on the display.