
A variety of copyright cases aim to set limits on what AI companies can and cannot do with human-generated creative work. As with other decisions, the recent ruling in Getty’s AI copyright case could affect what AI tools are allowed to offer their users.
In a London case brought against Stability AI by Getty Images, Judge Joanna Smith ruled on Tuesday that the artificial intelligence company, which makes the popular Stable Diffusion image models, did not violate copyright law in training those models. Smith said Stability AI did not infringe the copyright in Getty’s images because it does not “store or reproduce any copyright works and has never done so.”
As with many AI-related lawsuits, the British court’s decision was narrow and precise rather than comprehensive. Smith determined that Getty succeeded only “partially” in its claim that Stability AI had infringed its trademarks by allowing users to create images resembling the iStock and Getty Images logos, and she said that success applies only in limited circumstances.
Smith described her findings as “historic” but “very limited” in scope. It’s a sentiment that mirrors rulings from US courts, highlighting the lack of consensus among judges when it comes to handling copyright claims in the age of artificial intelligence.
The UK lawsuit was one of the first major cases in which a large content library alleged that an AI company acted illegally by scraping its content from the web. Companies like Stability AI need huge amounts of human-generated content to build their models. In US cases involving similar allegations, Anthropic and Meta have largely prevailed over authors who alleged their books were used to train AI models without their permission or compensation.
Because of the complexities involved in Tuesday’s ruling, both companies found room to claim victory.
Getty called the outcome a win for intellectual property holders, given that the judge found Stable Diffusion infringed Getty’s trademarks when it included them in AI-generated output.
“More importantly, the court rejected Stability AI’s attempt to hold the user liable for this infringement, asserting that responsibility for the existence of such trademarks lies with the model provider, which has control over the images used to train the model,” Getty said in a statement.
However, Smith’s ruling addressed only the secondary copyright claims that remained after Getty dropped its initial claims earlier this year, a point Stability AI focused on.
“Getty’s decision to voluntarily dismiss most of its copyright claims at the conclusion of trial testimony left only a subset of claims before the court, and this final ruling resolves the copyright concerns that were the underlying issue,” Stability AI general counsel Christian Doyle said in a statement.
Smith stressed that her ruling is specific to the evidence and arguments presented in this particular case. This means that another similar case could have a different outcome, depending on the exact claim and the law being considered. Similar legal complications have occurred in other copyright infringement rulings.
US copyright law has a long history of precedent, including the four-factor fair use test that judges must weigh. However, the novelty of generative AI technology has raised a number of questions that courts must now consider, and some advocates argue the current law is insufficient to protect creators.
Each ruling in these cases helps build a new set of precedents for courts to consider. For creatives, this latest ruling means two things. First, those using Stability AI in the UK should be able to continue doing so without hindrance. However, creators concerned about their work being used to train AI models still face the possibility of their digital content being included in training databases.