Google is betting that its invisible watermarks can be just as useful as visible ones. The company continues its big week of Gemini 3 news with the announcement that it's bringing its AI content detector, SynthID Detector, out of private beta for everyone to use.
The news comes in conjunction with the release of Nano Banana Pro, the latest version of Google's popular AI image editor. The new Pro model comes with plenty of upgrades, including the ability to render crisp text and upscale photos to 4K. That's great for creators using AI, but it also means it will be more difficult than ever to identify AI-generated content.
We had deepfakes long before generative AI. But AI tools, like those developed by Google and OpenAI, let anyone create convincing fake content faster and more cheaply than ever before. That has led to a massive influx of AI content online, everything from low-quality AI slop to realistic-looking deepfakes. OpenAI's viral AI video app, Sora, was another key example of how easily these AI tools can be abused. It's not a new problem, but AI has dramatically escalated the deepfake crisis.
Read more: AI slop has turned social media into an antisocial wasteland
That's why SynthID was created. Google introduced SynthID in 2023, and every AI model it has released since then has attached these invisible watermarks to AI content. Google adds a small, visible sparkle watermark as well, but that doesn't do much good when you're quickly scrolling through your social media feed rather than closely scrutinizing each post. To help keep the deepfake crisis (which the company helped create) from getting worse, Google is introducing a new tool for identifying AI content.
You can now ask Gemini whether a photo was created using AI. But it will only know if the image was made with Google's AI.
SynthID Detector does exactly what its name suggests: it analyzes images and can detect the invisible SynthID watermark. So, in theory, you can upload a photo to Gemini and ask the chatbot whether it was created using AI. But there's a big catch: Gemini can only confirm whether a photo was made using Google's own AI, not another company's. Since there are so many AI models that can generate photos and videos, Gemini likely won't be able to tell you whether something was created with non-Google software.
Right now, you can only ask about images, but Google said in a blog post that it plans to expand the capability to video and audio. However limited, such tools are still a step in the right direction. There are a number of AI detection tools out there, but none of them are perfect, and generative media models are improving rapidly, sometimes too quickly for detection tools to keep up. That's why it's so important to label any AI-made content you share online and to stay skeptical of suspicious photos or videos you see in your feeds.
Don't miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred source on Google.
For more, check out everything announced with Gemini 3 and what's new in Nano Banana Pro.