
Artificial intelligence makes content creation easier than ever, and scammers are taking advantage of it. A report released Thursday by security firm McAfee found that nearly three-quarters of Americans (72%) have seen fake endorsements from celebrities or influencers. Of all the celebrities whose names, images and likenesses are exploited in online scams, Taylor Swift's is the most commonly used.
After Swift, the celebrities whose likenesses are most often used in online scams are Scarlett Johansson, Jenna Ortega and Sydney Sweeney. The majority of the top 10 are pop culture icons or musicians, including Sabrina Carpenter, Kim Kardashian and Zendaya. US Representative Alexandria Ocasio-Cortez is the only politician on the list. Notably, only two are men: Tom Cruise and LeBron James.
The study focuses on product-based, consumer-oriented online scams, such as a fake cryptocurrency claiming to be endorsed by Ocasio-Cortez. It doesn't measure the volume of all deepfakes created, which is why other high-profile figures, like President Donald Trump, aren't in the top 10. These scammers rely on getting people to interact with their content, whether by clicking on fake links, signing up for fraudulent giveaways or purchasing counterfeit products. So, unfortunately, it only makes sense that they lean on big names like Swift to get our attention. For example, when Swift announced her engagement to Travis Kelce, imposters created advertisements for counterfeit goods supposedly recommended by Kelce. Celebrities and influencers have been exploited this way for a long time, but AI is giving bad actors an unfortunate boost.
Generative AI tools such as image, video and audio generators give bad actors a new avenue. They can clone a celebrity's likeness to create a fake endorsement or giveaway, or to promote counterfeit products. All the scammer needs is a sufficiently convincing social media post. The tactic works: McAfee reports that 39% of people have clicked on one of these false endorsements, and 10% of them put their personal information at risk, losing $525 on average.
The companies that build these AI models have systems in place to try to prevent scammers, or anyone else, from creating AI content of celebrities without their consent. But we've already seen many times that these systems aren't perfect and can be circumvented. In the first few weeks after Sora's launch, the estate of civil rights leader Martin Luther King Jr. had to reach out to OpenAI over concerns about a torrent of disrespectful and racist AI-generated videos of King on the platform. While OpenAI said it plans to work with actors and celebrities on this issue, it's not a problem that simple technical or policy tweaks alone can solve.
It's difficult to identify AI-generated content, but there are some signs you can watch for. Here are a few.
Even in the age of AI, the advice for spotting scams isn't much different from before. Scammers may use artificial intelligence, but they still rely on the same techniques.
“The red flags haven’t changed: urgency, emotional pressure, and requests for personal information or payment are still the biggest giveaways,” Abhishek Karnik, head of threat research at McAfee, said in an email.
Here are some scam red flags to watch for.
Karnik said scammers will try to manipulate your trust. Being careful and vigilant about sharing your personal information can help you avoid falling for a scam. Just because something claims to be endorsed or used by Swift doesn't mean it actually is.
For more, see our guides on how to detect bank fraud and why phishing emails are no longer obvious. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis's copyrights in training and operating its AI systems.)