The next legal frontier is your face and artificial intelligence


This is Step Back, a weekly newsletter covering one essential story from the world of technology. For more on the AI legal quagmire, follow along with Adi Robertson. Step Back arrives in our subscribers’ inboxes at 8AM ET. Subscribe to Step Back here.

The song was called “Heart on My Sleeve,” and if you didn’t know better, you might have thought you were hearing Drake. If you were in the know, you knew you were hearing the starting bell of a new legal and cultural battle: the fight over how AI services should be able to use people’s faces and voices, and how platforms should respond.

When Drake’s AI-generated fake song “Heart on My Sleeve” appeared back in 2023, it was novel, but the problems it presented were obvious. The song’s close imitation of the artist alarmed the music industry, and streaming services removed it on technical and copyright grounds. But the creator wasn’t directly copying anything, just producing a very close imitation. So attention quickly turned to a separate area of law: likeness rights. The field was once synonymous with celebrities fighting unauthorized endorsements and parodies, and with the proliferation of audio and video deepfakes, it seemed like one of the few tools available to regulate them.

Unlike copyright, which is governed by the DMCA and multiple international treaties, there is no federal likeness law. Instead, there’s a patchwork of different state laws, none of which were originally designed with AI in mind. But the past few years have seen a wave of efforts to change that. In 2024, Tennessee Governor Bill Lee and California Governor Gavin Newsom, both leaders of states that rely heavily on their media industries, signed bills expanding protections against unauthorized digital replicas of performers.

But law almost always moves more slowly than technology. Last month, OpenAI launched Sora, an AI video generation app aimed specifically at capturing and remixing the likenesses of real people. It opened the floodgates to an avalanche of often stunningly realistic deepfakes, including of people who never consented to their creation. OpenAI and other companies are responding by implementing their own likeness policies, which, if nothing else, could become the new rules of the road for the internet.

OpenAI has denied being reckless with the Sora launch, with CEO Sam Altman claiming that, if anything, it has been “too restrictive” with its guardrails. Still, the service has generated plenty of complaints. It launched with minimal restrictions on likenesses of historical figures, only to reverse course after the estate of Martin Luther King Jr. complained about “disrespectful depictions” of the assassinated civil rights leader espousing racism or committing crimes. It promised blanket restrictions on unauthorized likenesses of living people, but users found ways around them to put celebrities like Bryan Cranston in Sora videos doing things like taking a selfie with Michael Jackson, which drew complaints from SAG-AFTRA and prompted OpenAI to reinforce its guardrails in unspecified ways there as well.

Even some people who did opt in were unsettled by the results, which included, for women, all kinds of fetish content. Altman said he hadn’t realized people might have “in-between” feelings about permitted likenesses, such as not wanting their digital double to be shown “saying offensive things or things they find very problematic.”

Sora has been addressing these issues with changes like the tweak to its historical figures policy, but it’s not the only AI video service, and things have gotten, overall, pretty weird. AI video has become a staple for President Donald Trump’s administration and some other politicians, including crude or overtly racist depictions of specific political enemies: Trump responded to last week’s “No Kings” protests with a video showing him dumping excrement on someone who looks like liberal influencer Harry Sisson, while New York City mayoral candidate Andrew Cuomo posted (and quickly deleted) a video titled “Criminals for Zohran Mamdani” that showed his Democratic opponent eating a handful of rice. As Kat Tenbarge chronicled in Spitfire News earlier this month, AI videos have become a fixture of petty personal dramas as well.

There’s an almost constant background threat of legal action over unauthorized videos, since celebrities like Scarlett Johansson have lawyered up over misused likenesses before. But unlike AI-related copyright infringement allegations, which have spawned several high-profile lawsuits and almost constant deliberation within regulatory agencies, few likeness incidents have escalated to that level, perhaps in part because the legal landscape remains in flux.

When SAG-AFTRA thanked OpenAI for changing Sora’s guardrails, it used the opportunity to promote the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, a years-old attempt to codify protections against “unauthorized digital replicas.” The NO FAKES Act, which has also received support from YouTube, would offer nationwide rights to control the use of a “highly realistic computer-generated electronic representation” of a living or dead person’s voice or visual likeness. It would include liability for online services that knowingly host unauthorized digital replicas as well.

The NO FAKES Act has drawn heavy criticism from online free speech groups. The EFF called it a mandate for “a new censorship infrastructure” that would force platforms to filter content at an enormous scale, inevitably leading to unintended takedowns and handing out “veto power” over the internet. The organization warned that while the bill carves out parody, satire, and commentary that should be permitted even without authorization, those exceptions would be cold comfort for anyone who can’t afford to litigate.

Opponents of the NO FAKES Act can take some comfort in how little legislation Congress manages to pass these days; we’re currently living through the second-longest federal government shutdown in history. There’s also a separate push for a moratorium on state AI regulation that could wipe out the new likeness laws. But in practice, likeness rules are still coming. Earlier this week, YouTube announced that it would let creators in its Partner Program search for unauthorized uploads using their likeness and request their removal. The move expands on existing policies that, among other things, let music industry partners take down content that “mimics an artist’s unique singing or rapping voice.”

Amid all of this, social norms are still evolving. We’re entering a world where you can easily make a video of almost anyone doing almost anything, but when should you? In many cases, those expectations are still very much up in the air.

  • Most of these recent conversations are about AI videos of people simply doing weird or silly things, but historically, research suggests the vast majority of deepfakes have been pornographic images of women, often made without their consent. Outside of Sora, there’s a whole separate conversation about things like the sexualized output of AI services and the legal issues around nonconsensual sexual imagery.
  • On top of the basic legal issue of when a likeness is unauthorized, there are also questions like when a video might be defamatory (if realistic enough) or harassing (if it’s part of a larger pattern of stalking and threats), which can make individual situations more complex.
  • Social platforms used to be almost always protected from liability by Section 230, which states that they cannot be treated as a publisher or speaker of third-party content. As more and more services take the active step of helping users create content, how well Section 230 protects the resulting photos and videos seems like a fascinating question.
  • Despite long-standing fears that AI will make it truly impossible to distinguish fakes from reality, it’s still often easy to use context and “tells” (from distinctive editing rhythms to outright watermarks) to see whether a video was AI-generated. The problem is that many people don’t look closely enough, or simply don’t care that it’s fake.