If you’ve been online even a little bit, you’ve probably seen a photo or video that was created by artificial intelligence. I know I’ve been fooled before; it happened to me with the viral video of rabbits on a trampoline. But Sora takes AI videos to a whole new level, making knowing how to spot AI more important than ever.
Sora is the sister app to ChatGPT, built by the same parent company, OpenAI. It’s named for OpenAI’s AI video generator, which launched in 2024 and recently got a major overhaul with the new Sora 2 model, along with an all-new social media app of the same name. The TikTok-like app went viral, with AI enthusiasts chasing invite codes. But it’s not like any other social media platform: everything you see on Sora is fake. Every video is generated by AI. Using Sora is an AI deepfake fever dream: harmless at first glance, with serious dangers lurking beneath the surface.
Technically, Sora’s videos are impressive compared with competitors like Midjourney V1 and Google Veo 3. Sora’s videos feature high definition, synchronized sound and striking creativity. Sora’s most popular feature, called “cameo,” lets you use other people’s likenesses and insert them into almost any AI-generated scene. It’s an impressive tool, and it creates eerily realistic videos.
That’s why many experts are concerned about Sora: it could make it easier than ever for anyone to create deepfakes, spread misinformation and blur the line between what’s real and what’s not. Public figures and celebrities are especially vulnerable to these potentially damaging deepfakes, which is why unions like SAG-AFTRA have pushed OpenAI to strengthen its guardrails.
Identifying AI content is an ongoing challenge for tech companies, social media platforms and everyone who uses them. But it’s not completely hopeless. Here are some things to look for to determine whether a video was made with Sora.
Every video made with the Sora iOS app includes a watermark when downloaded. It’s the white Sora logo, a cloud icon, that bounces around the edges of the video, similar to the way TikTok videos are watermarked.
Watermarking content is one of the biggest ways AI companies can help us visually identify AI-generated content. Google’s Gemini “Nano Banana” model, for example, automatically places watermarks on its images. Watermarks are useful because they serve as a clear sign that the content was created with the help of AI.
But watermarks aren’t perfect. For one, if the watermark is static (not moving), it can easily be cropped out. Even for moving watermarks like Sora’s, there are apps designed specifically to remove them, so watermarks alone can’t be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, before OpenAI’s Sora, there was no popular, accessible, skill-free way to create those videos. But his argument does raise a valid point about the need to rely on other verification methods.
I know what you’re probably thinking: there’s no way you’re going to check a video’s metadata to determine whether it’s real. I understand where you’re coming from; it’s an extra step, and you may not know where to start. But it’s a great way to determine whether a video was made with Sora, and it’s easier to do than you might think.
Metadata is a set of information automatically attached to a piece of content when it’s created. It tells you more about how a photo or video was made. It can include the type of camera used, the location, the date and time the content was captured, and the file name. Every photo and video contains metadata, whether it was made by a human or by AI. Much AI-generated content also carries content credentials that point to its AI origins.
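If you’d rather poke at metadata yourself, here’s a minimal sketch of how you might dump a file’s metadata from Python. It assumes the free ExifTool utility (exiftool.org) is installed, and “video.mp4” is a hypothetical file name:

```python
import json
import subprocess

# Ask ExifTool to emit every metadata field it can read, as JSON.
# "video.mp4" is a placeholder; substitute the file you want to inspect.
result = subprocess.run(
    ["exiftool", "-json", "video.mp4"],
    capture_output=True,
    text=True,
    check=True,
)

# ExifTool returns a JSON array with one object per file.
metadata = json.loads(result.stdout)[0]
for field, value in metadata.items():
    print(f"{field}: {value}")
```

Fields worth scanning include the creation date, the recording device and anything that references an AI tool or content credentials.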
OpenAI is part of the Coalition for Content Provenance and Authenticity (C2PA), which means Sora videos include C2PA metadata. You can use the Content Authenticity Initiative’s verification tool to check the metadata of a video, image or document. (The Content Authenticity Initiative is part of C2PA.) Here’s how.
How to check the metadata of an image, video or document:
1. Go to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right panel. If the file was generated by AI, that should be noted in the content summary section.
When you run a Sora video through this tool, it will show that the video was issued by OpenAI and note that it was AI-generated. All Sora videos carry these content credentials, so you can use them to confirm a video was made with Sora.
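If you check files often, you can script this instead of visiting the website. Below is a minimal sketch assuming the open-source c2patool command-line utility from the Content Authenticity Initiative is installed; “video.mp4” is a placeholder file name, and the “OpenAI” string check is an assumption based on what the web tool displays for Sora videos:

```python
import subprocess

# c2patool (github.com/contentauth/c2patool) prints a file's C2PA manifest
# when given a path. "video.mp4" is a placeholder file name.
result = subprocess.run(
    ["c2patool", "video.mp4"],
    capture_output=True,
    text=True,
)

if result.returncode != 0 or not result.stdout.strip():
    # No readable content credentials. This does NOT prove the video is
    # real; credentials can be stripped by re-encoding or editing apps.
    print("No C2PA content credentials found.")
else:
    report = result.stdout
    # Heuristic: look for OpenAI in the manifest, since the CAI's web tool
    # shows Sora videos as issued by OpenAI. The exact wording of the
    # manifest fields is an assumption.
    if "OpenAI" in report:
        print("Content credentials reference OpenAI; likely made with Sora.")
    print(report)
```

As with the web tool, a missing manifest tells you nothing either way; treat this as one signal among several.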
This tool, like every AI detector, isn’t perfect. There are several ways AI videos can evade detection. Videos from generators other than Sora may not carry the metadata flags the tool needs to determine whether they’re AI-generated; in my testing, for example, AI videos created with Midjourney weren’t flagged. And even if a video was created with Sora, running it through a third-party app (such as a watermark-removal app) and re-downloading it makes it less likely the tool will identify it as AI.
The Content Authenticity Initiative’s verification tool correctly identified that a video I made with Sora was AI-generated, along with the date and time I created it.
If you use one of Meta’s social media platforms, like Instagram or Facebook, you may get a little help determining whether something is AI-generated. Meta has internal systems in place to help detect and label AI content. Those systems aren’t perfect, but when a post is flagged, the label is clearly visible. TikTok and YouTube have similar policies for labeling AI content.
The only truly reliable way to know whether something was created with AI is for the creator to disclose it. Many social media platforms now offer settings that let users label their posts as AI-generated. Even a simple credit or disclosure in a caption can go a long way toward helping everyone understand how something was made.
When you’re scrolling through Sora, you know nothing is real. But once AI-generated videos leave the app and are shared elsewhere, it’s our collective responsibility to disclose how they were made. As AI models like Sora continue to blur the line between reality and AI, it’s up to all of us to make it as clear as possible whether something is real or AI-generated.
There’s no single, foolproof way to know at a glance whether a video is real or AI-generated. The best thing you can do to avoid being duped is to not automatically, unquestioningly believe everything you see online. Trust your gut: if something feels unreal, it probably is. In these unprecedented, AI-filled times, your best defense is to closely examine the videos you watch. Don’t just glance and scroll away mindlessly. Look for distorted text, disappearing objects and physics-defying movements. And don’t blame yourself if you get fooled from time to time; even the experts make mistakes.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’s copyrights in training and operating its AI systems.)