YouTube is expanding its new “similarity detection” technology, which identifies AI-generated content such as deepfakes, to people in the entertainment industry, the company announced Tuesday.
The technology works similarly to YouTube’s Content ID system, which detects copyrighted material in user-uploaded videos, allowing rights holders to request removal or share in a video’s royalties.
Similarity detection does the same thing, but for simulated faces. The feature is intended to help protect creators and other public figures from having their identities used without their permission, a common problem for celebrities who find their images used in fraudulent ads.
The technology was first made available to a subset of YouTube creators in a pilot program last year, before expanding more widely this spring to include politicians, government officials, and journalists.

Now, YouTube says the technology is available to those working in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies such as CAA, UTA, WME, and Untitled Management, which have provided feedback on the new tool.
Using the similarity detection tool does not require artists to have their own YouTube channels.
Instead, the feature searches through AI-generated content to detect visual matches to a registered participant’s face. Users can then choose to request that the video be removed for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it will not remove all content, as its rules allow parody and satirical content.
The company says the technology will also support voice detection in the future.
Relatedly, YouTube has also called for similar protections at the federal level through its support of legislation in Washington, D.C., that would regulate the use of artificial intelligence to create unauthorized recreations of an individual’s voice and visual likeness.
The company has not yet said how many AI deepfake takedowns the tool has handled so far, but it noted in March that the volume of takedowns was still “very small.”