Is AI-generated output protected speech? One court isn't sure


A lawsuit against Google and companion chatbot service Character.AI — which is accused of contributing to the death of a teenager — can move forward, a Florida judge has ruled. In a decision filed today, Judge Anne Conway said an attempted First Amendment defense wasn't enough to get the suit thrown out. Conway found that, despite some similarities to video games and other expressive media, she is "not prepared" to hold that Character.AI's output is speech.

The ruling is a relatively early indicator of how AI-generated language may be treated in court. It stems from a lawsuit filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after becoming obsessed with a chatbot that allegedly encouraged his suicidal ideation. Character.AI and Google (which is closely tied to the chatbot company) argued that the service is akin to talking with a video game's non-player character or joining a social network, which would grant it the expansive legal protection the First Amendment provides and likely dramatically lower the suit's chances of success. Conway, however, was skeptical.

The judge said that while the companies "rest their conclusion primarily on analogy" with those examples, they "do not meaningfully advance their analogies." The court's decision, she wrote, does not turn on *whether* Character.AI is similar to other mediums that have received First Amendment protections; rather, it turns on *how* Character.AI is similar to those other mediums — in other words, whether Character.AI resembles things like video games because it, too, communicates ideas that would count as speech.

Although Google does not own Character.AI, it will remain a defendant in the suit thanks to its links to the company and its product. The company's founders, Noam Shazeer and Daniel De Freitas, who are named separately in the suit, worked on the platform as Google employees before leaving to launch it, and were later rehired there. Character.AI is also facing a separate lawsuit alleging it harmed another young user's mental health, and a handful of state legislators have pushed regulation of "companion chatbots" that simulate relationships with users — including one bill, the LEAD Act, that would prohibit their use by children in California. If passed, such rules are likely to be fought in court based at least in part on companion chatbots' First Amendment status.

The outcome of this case will depend in large part on whether Character.AI is legally a "product" that is harmfully defective. The ruling notes that "courts generally do not categorize ideas, images, information, words, expressions, or concepts as products," a category that covers many conventional video games — citing, for instance, a ruling that found Mortal Kombat's producers couldn't be held liable for "addicting" players and inspiring them to kill. (The suit against Character.AI likewise accuses the platform of harmful design.) But systems like Character.AI aren't scripted directly the way most video game character dialogue is; instead, they produce automated text that's determined heavily by reacting to and mirroring user inputs.

"These are genuinely difficult and novel cases that courts are going to have to deal with."

Conway also noted that the plaintiffs took Character.AI to task for failing to verify users' ages and for not letting users "exclude indecent content," among other allegedly defective features that go beyond direct interactions with the chatbots themselves.

Beyond the question of the platform's First Amendment protections, the judge allowed Setzer's family to move forward with deceptive trade practices claims, including that the company "misled users to believe" its characters "were real persons, some of which were licensed mental health professionals," and that Setzer was harmed by Character.AI's anthropomorphic design decisions. (Character.AI bots will describe themselves as real people in text, despite warnings to the contrary in the interface, and therapy-themed bots are common on the platform.)

She also allowed a claim that Character.AI violated a law meant to prevent adults from sexually communicating with minors online, saying the complaint "highlights several interactions of a sexual nature" between Setzer and Character.AI's characters. Character.AI has said it has implemented additional safeguards since Setzer's death, including a more restricted model for teen users.

Becca Branum, deputy director of the Center for Democracy and Technology's Free Expression Project, called the judge's First Amendment analysis "pretty thin" — though, as a very preliminary decision, it leaves plenty of room for future debate. "If we're thinking about the whole realm of things that could be output by AI, those types of chatbot outputs are themselves expressive, [and] also reflect the editorial discretion and protected expression of the model designer," Branum said. But "in everyone's defense, this stuff is really novel," she added. "These are genuinely difficult and novel cases that courts are going to have to deal with."
