Deepfakes are everywhere: How to spot AI-generated videos


AI-generated videos are more popular than ever. From cute animal clips to out-of-this-world content, these videos have invaded social media and are looking more real by the day. A year ago it would have been easy to spot a “fake” video, but AI tools have now become sophisticated enough to fool millions of people.

New AI tools, including OpenAI’s Sora, Google’s Veo 3 and Nano Banana, have erased the line between reality and AI-generated illusion. Now we’re swimming in a sea of AI-generated deepfakes and videos, from fake celebrity endorsements to fake disaster footage.

If you’re struggling to separate reality from AI, you’re not alone. Here are some helpful tips to cut through the hype and get to the bottom of any AI-inspired creation. For more, check out the problem behind AI video’s power requirements and what we need to do in 2026 to avoid further AI regression.


Why are Sora AI videos so hard to detect?

Technically, Sora’s videos are impressive compared with competitors like Midjourney V1 and Google Veo 3. They offer high fidelity, synchronized audio and striking creativity. Sora’s most popular feature, called “cameo,” lets you take other people’s likenesses and insert them into almost any AI-generated scene. It’s an impressive tool that creates eerily realistic videos.

Sora joins the likes of Google’s Veo 3, another technically impressive AI video generator. These are two of the most popular tools, but certainly not the only ones. Generative media became an area of focus for many major tech companies in 2025, as image and video models are poised to give companies an edge in the race to develop the most advanced AI across all modalities. Google and OpenAI both released major image and video models this year in an apparent attempt to outdo each other.

That’s why many experts are concerned about Sora and other AI video generators. Sora makes it easy for anyone to create realistic-looking videos featuring its users. Public figures and celebrities are particularly vulnerable to these deepfakes, and unions like SAG-AFTRA have pushed OpenAI to strengthen its guardrails. Other AI video generators present similar risks, along with concerns about flooding the internet with meaningless AI content and serving as dangerous tools for spreading misinformation.

Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone else. But it’s not completely hopeless. Here are some things to look for to determine whether a video was made with Sora.

Look for the Sora watermark

Every video made with the Sora iOS app includes a watermark when you download it: the white Sora logo, a cloud icon, that bounces around the edges of the video, similar to the way TikTok videos are watermarked. Watermarking is one of the biggest ways AI companies can help us visually identify AI-generated content. Google Gemini’s Nano Banana model, for example, automatically watermarks its images. Watermarks are great because they serve as a clear sign that content was created with the help of AI.

But watermarks aren’t perfect. If a watermark is static (not moving), it can easily be cropped out. Even animated watermarks like Sora’s can be stripped by apps designed specifically to remove them, so watermarks alone can’t be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, before Sora there was no popular, easily accessible, no-skill-required way to create these videos. Still, his argument raises a valid point about the need to rely on other methods of verification.

Check metadata

I know you’re probably thinking there’s no way you’re going to check a video’s metadata to determine whether it’s real. I get where you’re coming from: it’s an extra step, and you may not know where to start. But it’s a great way to determine whether a video was made with Sora, and it’s easier to do than you might think.

Metadata is a set of information automatically attached to a piece of content when it’s created. It gives you more insight into how a photo or video was made. It can include the type of camera used, the location, the date and time the video was captured, and the file name. Every photo and video has metadata, whether it was made by a human or by AI. Much AI-generated content also carries content credentials that point to its AI origins.
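If you want to poke at that metadata yourself, here’s a minimal sketch of one way to do it in Python, assuming the free ExifTool utility (exiftool.org) is installed on your system. The file name and the fields printed are just illustrative examples; different files expose different fields.

```python
# Minimal sketch: dump a photo or video's metadata with ExifTool.
# Assumes ExifTool (exiftool.org) is installed and on your PATH.
import json
import subprocess

def read_metadata(path: str) -> dict:
    """Return every metadata field ExifTool can find, as a dict."""
    result = subprocess.run(
        ["exiftool", "-json", path],  # -json emits one JSON object per file
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)[0]

meta = read_metadata("video.mp4")  # hypothetical file name
for field in ("FileName", "CreateDate", "Make", "Model"):
    print(f"{field}: {meta.get(field, 'not present')}")
```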

OpenAI is part of the Coalition for Content Provenance and Authenticity, which means Sora videos include C2PA metadata. You can use the verification tool from the Content Authenticity Initiative to check the metadata of a video, image or document. Here’s how. (The Content Authenticity Initiative is part of the C2PA.)

How to check the metadata of an image, video, or document

1. Go to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check, then click Open.
3. Check the information in the panel on the right. If the file was generated by AI, that should be noted in the content summary section.
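If you’d rather check files from the command line, the same Content Credentials can be read with c2patool, the C2PA project’s open-source CLI. Here’s a hedged sketch, assuming c2patool is installed and on your PATH; the file name is hypothetical.

```python
# Sketch: read a file's C2PA Content Credentials with c2patool, the
# open-source CLI from the C2PA project (github.com/contentauth/c2patool).
# Assumes c2patool is installed and on your PATH; the file is hypothetical.
import subprocess

def read_content_credentials(path: str) -> str | None:
    """Return the file's C2PA manifest as JSON text, or None if absent."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest found, unreadable file, or tool error
    return result.stdout

manifest = read_content_credentials("sora_video.mp4")
if manifest:
    print(manifest)  # look for the issuer (e.g., OpenAI) and AI-generation claims
else:
    print("No Content Credentials found. That alone doesn't prove it's real.")
```

Either way, treat a missing manifest as “unknown,” not as proof that a video is real.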

When you run a Sora video through this tool, it will show that the video was “issued by OpenAI” and include the fact that it was created with AI. All Sora videos should carry these credentials, letting you confirm they were made with Sora.

This tool, like all AI detectors, isn’t perfect. There are several ways AI videos can evade detection. Non-Sora videos may not carry the metadata flags the tool needs to determine whether they’re AI-generated; for example, AI videos created with Midjourney aren’t flagged, as I confirmed in my testing. And even if a video was created with Sora, running it through a third-party app (such as a watermark-removal app) and re-downloading it makes it unlikely the tool will identify it as AI.

The Content Authenticity Initiative verification tool correctly flagged a video I created with Sora as AI-generated, along with the date and time it was created.

Look for other AI labels, and add your own

If you use one of Meta’s social media platforms, like Instagram or Facebook, you may get a little help determining whether something is AI. Meta has internal systems in place to help flag and label AI content as such. These systems aren’t perfect, but the labels on flagged posts are clearly visible. TikTok and YouTube have similar policies for labeling AI content.

The only reliable way to know whether something was created with AI is for the creator to disclose it. Many social media platforms now offer settings that let you label your posts as AI-generated. Even a simple credit or disclosure in your caption can go a long way toward helping everyone understand how something was made.

When you’re swiping through Sora, you know nothing there is real. But once you leave the app and share your AI-generated videos elsewhere, it becomes our collective responsibility to disclose how a video was made. As AI models like Sora continue to blur the line between reality and AI, it’s up to all of us to make it as clear as possible whether something is real or AI.

Most importantly, stay vigilant

There’s no single foolproof way to know at a glance whether a video is real or AI-generated. The best thing you can do to avoid being duped is to stop automatically and unquestioningly believing everything you see online. Trust your intuition: if something seems off, it probably is. In these unprecedented, AI-saturated times, your best defense is to closely examine the videos you watch. Don’t just glance and scroll on mindlessly. Look for distorted text, disappearing objects and physics-defying movements. And don’t beat yourself up if you get fooled from time to time; even experts make mistakes.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’s copyrights in training and operating its AI systems.)


