Gemini’s AI photo detector is only scratching the surface. This is not good enough


When you ask an AI to look in the mirror, it doesn't always recognize itself. That's the feeling you get when you ask one to determine whether an image is real or generated by artificial intelligence.

Last week, Google tried to help us distinguish between real and fake, albeit in a very limited way. In the Gemini app, you can share a photo and ask whether it's real, and Gemini will check it for SynthID, a digital watermark, to tell you whether it was created by Google's AI tools. (On the other hand, Google also launched Nano Banana Pro last week, its new image model, which makes fake images even harder to spot with the naked eye.)


Within this limited scope, Google's reality check works well. Gemini is fast and will tell you whether something was made by Google's AI. In my testing, it even worked on a screenshot of an AI image. The answer was quick and straightforward: yes, this photo, or at least more than half of it, is fake.

But ask it about an image made by literally any other image generator and you won't get that definitive answer. What you get instead is an evidence review: the model looks for all the typical signs that something is fake. In other words, it does essentially what we do with our own eyes, and we still can't fully trust its results.

While Google's SynthID check is reliable and necessary, asking a chatbot to evaluate an image that lacks a watermark is close to worthless. Google has delivered a useful tool for verifying the source of an image, but if we're ever going to trust our eyes on the internet again, every AI interface we use needs to be able to verify images from every kind of AI model.

Hopefully, we'll soon be able to drop an image into a Google search, for example, and see whether it's fake. Deepfakes have simply become too good to reality-check with the naked eye.

Verifying photos with chatbots is a mixed bag

There's not much to say about Google's SynthID scanning. When you ask Gemini (in the app) to evaluate a Google-generated photo, it knows what it's looking at. It works. I'd like to see it rolled out everywhere Gemini appears, like the browser version and Google Search, and according to Google's blog post about the feature, that's already underway.

The fact that the browser version of Gemini doesn't have this functionality yet means we can see how the model itself, without SynthID, responds when asked whether an AI-generated image is real. I asked the browser edition of Gemini to evaluate an infographic that Google provided to journalists as a handout showing the new Nano Banana Pro model in action. It was generated by AI; its metadata says so, and the SynthID check in the Gemini app confirms it. Gemini in the browser was wishy-washy: it said the design could have come from AI or from a human designer. It even claimed its SynthID tool found nothing pointing to artificial intelligence. (Though when I asked it to try again, it said it hit an error in the tool.) Bottom line? It can't tell.
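Curious readers can do a crude version of that metadata check themselves. Here's a minimal sketch in Python using the Pillow imaging library; the marker strings it searches for are my own illustrative guesses, not any official list, and a miss proves nothing, since plenty of generators embed no metadata at all.

```python
# Minimal sketch: scan an image's EXIF and XMP metadata for hints that it
# was AI-generated. Requires Pillow (pip install pillow). The marker strings
# are illustrative guesses; generators vary in what, if anything, they embed.
from PIL import Image
from PIL.ExifTags import TAGS

AI_MARKERS = ("made with google ai", "c2pa", "generative", "gemini", "dall-e")

def metadata_hints(path: str) -> list[str]:
    hints = []
    with Image.open(path) as img:
        # EXIF tags such as Software or ImageDescription sometimes name the tool.
        for tag_id, value in img.getexif().items():
            name = TAGS.get(tag_id, str(tag_id))
            if any(m in str(value).lower() for m in AI_MARKERS):
                hints.append(f"EXIF {name}: {value}")
        # XMP is where provenance statements usually live (key differs by format).
        xmp = img.info.get("xmp") or img.info.get("XML:com.adobe.xmp") or b""
        if isinstance(xmp, bytes):
            xmp = xmp.decode("utf-8", errors="ignore")
        for marker in AI_MARKERS:
            if marker in xmp.lower():
                hints.append(f"XMP mentions '{marker}'")
    return hints

if __name__ == "__main__":
    results = metadata_hints("handout.png")  # placeholder file name
    for line in results or ["No AI markers found (which proves nothing)"]:
        print(line)
```

Note what this can't do: a screenshot or a re-saved copy strips all of it, which is exactly why an in-pixel watermark like SynthID matters.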

What about other chatbots? I asked Nano Banana Pro to create an image of a cat in a tuxedo lying on a Monopoly board. At first glance, the image was reasonably realistic; I sent it to unsuspecting co-workers, who assumed it was my cat. But look closely and you'll see mistakes: the Monopoly board doesn't make sense, with Park Place appearing in several wrong spots and the colors off.

[Image: a black-and-white cat lying on a Monopoly board in a living room. This is not a real cat or a real Monopoly board; the image was generated by Google's Nano Banana Pro AI image model. Created by John Reed using Gemini AI.]

I’ve asked a variety of chatbots and AI-powered models whether the image was AI-generated and the answers have been all over the place.

Gemini on my phone detected it immediately using the SynthID checker. Gemini 3, the higher-level thinking model released this week, provided a detailed analysis of why the image was likely AI-generated. Gemini 2.5 Flash (the default model you get by selecting "Fast") guessed it was a real photo based on the level of detail and realism. I tried ChatGPT twice on two different days and got two different answers: one with a comprehensive explanation of why it was clearly real, the other with an equally long thesis on why it was fake. "It looks real," said Claude, on both the Haiku 4.5 and Sonnet 4.5 models.

When I tested images generated by non-Google AI tools, the chatbots based their evaluations on the quality of the generation. Images with more obvious flaws, such as mismatched lighting and poorly rendered text, were more reliably flagged as AI. But the results were inconsistent. Honestly, none of it was more accurate than giving the image a hard, critical look with my own eyes. That's not good enough.

The future of artificial intelligence detection

Google's latest tool charts a possible path forward, even if it doesn't go nearly far enough. Yes, one solution to the growing deepfake problem is the ability to verify an image in a chatbot app. But it needs to work for more images and more apps.

It shouldn't take special knowledge to spot fakes. You shouldn't have to hunt down a dedicated detection app, or analyze metadata, or scour an image for the telltale errors that can give away AI generation. As the dramatic improvement in image and video models over the past few months has shown, those tells may be reliable today and useless tomorrow.

Read more: Google’s Nano Banana Pro creates ultra-realistic images using artificial intelligence. It scares the hell out of me

If you run across an image online and have doubts about it, you should be able to go to Gemini, Google Search, ChatGPT, Claude, or any tool of your choice and have it scan for a universal, hard-to-remove digital watermark. Work toward this is underway through the Coalition for Content Provenance and Authenticity, or C2PA. The result should be something ordinary people can easily verify without a special app or expertise. It should live in the tools you already use every day. And when you ask an AI, it should know where to look.
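Early plumbing for this already exists: the C2PA project publishes open-source tools for reading provenance manifests. Here's a minimal sketch using the c2pa-python package; I'm assuming the Reader API from its recent releases, and "photo.jpg" is a placeholder, so treat this as an illustration rather than a recipe.

```python
# Minimal sketch: look for a C2PA provenance manifest in an image using the
# open-source c2pa-python package (pip install c2pa-python). Assumes the
# Reader API from recent releases; "photo.jpg" is a placeholder file name.
import json

from c2pa import Reader

def describe_provenance(path: str) -> None:
    try:
        reader = Reader.from_file(path)
        store = json.loads(reader.json())
    except Exception as err:
        # No manifest found, or it was stripped when the image was re-saved.
        print(f"No readable C2PA manifest: {err}")
        return

    # The manifest store names one "active" manifest describing the asset.
    manifest = store.get("manifests", {}).get(store.get("active_manifest"), {})
    print("Claim generator:", manifest.get("claim_generator", "unknown"))
    # Assertions can state outright that the image came from a generative model.
    for assertion in manifest.get("assertions", []):
        print("Assertion:", assertion.get("label"))

describe_provenance("photo.jpg")
```

The catch, as with all metadata, is that manifests can be stripped, which is why C2PA provenance and in-pixel watermarks like SynthID are complements rather than competitors.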

We shouldn't have to guess at what's real and what's not. AI companies have a responsibility to give us a comprehensive, foolproof reality check. Maybe this is the start of the way forward.


