This is Step Back, a weekly newsletter covering one essential story from the world of technology. To learn more about smartphones and digital photos – real or otherwise – follow along with Allison Johnson. Step Back arrives in our subscribers’ inboxes at 8 a.m. ET. Subscribe to Step Back here.
Do you remember the early days of AI image generation? Oh, how we laughed when our prompts turned up people with too many fingers, rubbery limbs, and other details that easily gave away the fakes. But if you haven’t kept up, I’m sorry to report that the joke is over. AI image generators are getting better at creating photorealistic fakes, thanks in part to a surprising new development: slightly worse image quality.
If you can believe it, OpenAI first launched its DALL-E image generator just under five years ago. Its first version could only create images at 256 x 256 pixels – little thumbnails, basically. A year later, DALL-E 2 debuted as a huge leap forward. Its images were 1024 x 1024 and looked surprisingly real. But there was always a tell.
When Casey Newton went hands-on with DALL-E 2 shortly after it launched in beta, he included an image generated from the prompt “Shiba Inu dog dressed as a firefighter.” It’s not bad, and it might fool you at a glance. But the contours of the dog’s fur are fuzzy, the patch on its (nice little) coat is just a meaningless scribble, and there’s a weird, chunky collar tag hanging off the side of the dog’s neck that doesn’t belong there. The cinnamon rolls with eyes from the same article were easier to believe.
Midjourney and Stable Diffusion also came to prominence around this time, embraced by AI artists and people who had, uh, less tasteful designs. New and better models appeared over the following two years, reducing the glitches and rendering text somewhat more accurately. But most AI-generated images still had a certain look: a little too smooth and perfect, with a kind of glow better suited to a portrait than a candid photo. Some AI images still look that way, but there’s a new trend toward actual realism that dulls that shine.
OpenAI is a relative newcomer compared to the likes of Google and Meta, but those established companies haven’t stood still as AI has risen. In the latter half of 2025, Google released a new image model in its Gemini app called Nano Banana. It went viral when people started using it to make realistic figurines of themselves. My colleague Robert Hart tried out the trend and noticed something interesting: the model maintained his actual likeness more faithfully than other AI tools.
That’s the thing about AI images: they tend to land in a neutral, gentle middle ground. Ask for an image of a table and the result will look basically right, but it will also look like a computer averaged every table it has ever seen into something with no actual character. The things that make a photo of a table – or an exact replica of your facial features – look real are actually the blemishes. I don’t mean weird AI artifacts like mangled letters. I mean a little mess, some clutter, and less-than-ideal lighting. More recently, that has also meant mimicking the flaws of our most popular cameras.
Google updated its image model less than a month ago, calling Nano Banana Pro its most advanced and realistic model yet. It can tap into real-world knowledge and render text better, but the thing I find most interesting is that it often mimics the look of a photo taken with a phone camera. The contrast (or lack thereof), the perspective, the crisp detail, the exposure choices – many of the images this model created for me bear the hallmarks of phone camera systems.
Whether you’re aware of it or not, you’re probably attuned to this look as well. The small sensors and lenses in our phones rely on multi-frame processing to overcome their limitations compared to a bigger camera, and the resulting images are optimized for viewing on a small screen. Altogether, this means phone photos have a certain “look” compared to a faithful rendering of the scene: shadows are lifted to reveal more detail, and sharpening is boosted to make subjects pop. Clearly, Google’s image generator has picked up on this pattern, too.
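If you want a feel for what that processing amounts to, here’s a minimal sketch of the two moves described above – lifting shadows with a tone curve and boosting sharpness with an unsharp mask. It’s a rough illustration of the general idea, not Google’s or any phone maker’s actual pipeline, and the file name and parameter values are invented:

```python
# Crude approximation of the "phone camera look": lift shadows with a
# gamma-style tone curve, then sharpen with an unsharp mask.
import numpy as np
from PIL import Image, ImageFilter

def phone_look(img: Image.Image, shadow_lift: float = 0.35) -> Image.Image:
    # Normalize pixel values to [0, 1] for the tone curve.
    x = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0

    # Exponents below 1 brighten dark regions more than highlights,
    # mimicking aggressive shadow recovery.
    lifted = x ** (1.0 - shadow_lift)

    out = Image.fromarray((lifted * 255.0).astype(np.uint8))

    # Unsharp masking is the same family of sharpening phones lean on
    # to make subjects "pop" on a small screen.
    return out.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=2))

result = phone_look(Image.open("scene.jpg"))  # hypothetical input file
result.save("scene_phone_look.jpg")
```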
Google isn’t alone in offering a more realistic look for generated images. Adobe’s Firefly image generator has a control called “Visual Intensity” that lets you tone down that glowing AI look. The results appear less glossy and more like they were captured on a real camera – perhaps more of a professional camera than a phone camera, which makes sense given Adobe’s audience of professionals. Even Meta’s AI generator has a “Style” slider that adjusts realism up or down. Elsewhere, video tools like OpenAI’s Sora 2 and Google’s Veo 3 have been used to create viral clips that mimic grainy, low-resolution security camera footage. When AI only has to look as good as a security camera, it can be very convincing.
There are plenty of good reasons to treat claims about AI’s unlimited potential for improvement with skepticism. AI agents are still struggling to buy a pair of shoes for you. But image models? They have undeniably improved, and the evidence is right before our eyes.
I recently spoke with Ben Sandofsky, co-founder of the popular iPhone camera app Halide, about AI’s recent turn toward imitating smartphone photos. By embracing the heavy processing tendencies we’re so familiar with from phone cameras – the ones that already make our photos feel slightly divorced from reality – “Google may have transcended the uncanny valley,” he says. The AI doesn’t have to make the scene look true to life; in a way, that’s a dead giveaway. All it has to do is imitate the way we record reality, imperfections and all, and use that as a kind of cheat code for believability. So how are we supposed to believe any image we see?
There’s Sam Altman’s point of view: real images and AI images will blend together in the future, and we’ll be fine with that. I think he’s partly right, but I find it hard to believe we won’t care at all about what’s real and what’s not. And to sort that out for ourselves, we’ll need some help. Help appears to be on the way – it’s just not coming as quickly as AI image models are improving.
The C2PA Content Credentials standard is gaining some much-needed momentum. On Google’s Pixel 10 series phones, every image captured by the camera receives a cryptographic signature that specifies how it was created. This avoids the “implied truth effect,” as Pixel camera lead Isaac Reynolds explained to me earlier this year. If you only label AI-generated images as AI, we assume that everything without a label is real. In reality, the lack of a label just means we don’t know where the image came from. So the Pixel camera labels AI and non-AI photos alike.
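To make that logic concrete, here’s a toy sketch of the three-state thinking. None of this is the C2PA API or Google’s actual code; the names and credential strings are invented for illustration:

```python
# Toy model of provenance labeling. The point: a missing credential is a
# third state ("unknown"), not evidence that an image is real.
from enum import Enum

class Provenance(Enum):
    CAMERA = "captured with a camera, no AI edits"
    AI = "created or edited with AI"
    UNKNOWN = "no credentials attached"

def classify(credential: str | None) -> Provenance:
    """Map a (hypothetical) credential string to a provenance state."""
    if credential is None:
        # The crucial case: no label does not mean "real" -- it only
        # means we don't know where the image came from.
        return Provenance.UNKNOWN
    return Provenance.AI if "ai" in credential.lower() else Provenance.CAMERA

for cred in ("camera-capture", "ai-generated", None):
    print(cred, "->", classify(cred).value)
```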
Labels are all well and good, but they’re no use if you can’t see them. That’s starting to change: earlier this year, Google Photos added support for displaying content credentials, and the company will also surface content credentials in search results and ads where they exist. That last part is key, though – currently, most photos taken with today’s phone cameras aren’t assigned credentials at all. For the system to work, device makers need to adopt the standard so that images are tagged as AI or not the moment they’re created, and the platforms where images are shared need to participate, too. Until that happens, we’re on our own – and there’s never been a better time to be skeptical of everything you see.