By Harry Johnson, CalMatters
This story was originally published by CalMatters. Sign up for their newsletters.
A California lawyer must pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the ChatGPT artificial intelligence tool.
The fine appears to be the largest of its kind yet: the court's opinion states that 21 of the 23 case quotations cited in the lawyer's opening brief were fabricated. It also notes that numerous state and federal courts outside California have encountered lawyers citing fake legal authority.
“Therefore, we publish this opinion as a warning,” it continued. “Simply stated, no brief, pleading, motion, or other paper filed in any court should contain citations — whether provided by generative AI or any other source — that the attorney responsible for submitting the filing has not read and verified.”
The opinion, issued 10 days ago by California's Second District Court of Appeal, is a stark example of why the state's legal authorities are racing to regulate the use of AI in the judicial system. The state's Judicial Council two weeks ago issued guidelines requiring judges and court staff to either prohibit generative AI or adopt a policy governing its use by December 15. Meanwhile, the State Bar of California is considering whether to strengthen its code of conduct to account for various forms of AI, following a request by California's Supreme Court last month.
The Los Angeles-area lawyer fined last week, Amir Mostafavi, told the court that he had not read the text generated by the AI model before filing the appeal in July 2023, months after OpenAI touted ChatGPT as capable of passing a bar exam. A three-judge panel fined him for filing a frivolous appeal, violating court rules, citing fake cases, and wasting the court's time and taxpayers' money, according to the opinion.
Mostafavi told CalMatters that he wrote the appeal himself, then used ChatGPT to try to improve it. He said he did not know it would add citations to the filing or make things up.
He believes it is unrealistic to expect lawyers to stop using AI. It has become an essential tool, he said, just as online databases largely replaced law libraries, and until AI systems stop hallucinating fabricated information, he urges lawyers who use AI to proceed with caution.
“In the meantime, we're going to have some casualties, we're going to have some damage, we're going to have some wreckage,” he said. “I hope this example helps others not fall into the hole. I'm paying the price.”
The fine issued to Mostafavi is the largest penalty of its kind imposed on a lawyer by a California state court, and one of the highest ever issued over a lawyer's use of AI, according to Damien Charlotin, who teaches a class on AI and the law at a business school in Paris. He tracks examples of lawyers citing fake cases, mainly in Australia, Canada, the United States, and the United Kingdom.
In a widely publicized case in May, a U.S. district court judge in California ordered two law firms to pay $31,100 in defense attorneys' fees and costs related to the use of “bogus AI-generated research.” In that ruling, the judge described feeling misled, said he had nearly cited the fake materials in a court order, and stated that “a strong deterrent is needed to ensure that attorneys don't succumb to this easy shortcut.”
“We're going to have some wreckage.”
Amir Mostafavi, the lawyer fined $10,000 after filing a brief filled with quotations made up by ChatGPT
Charlotin believes the courts and the public should expect exponential growth in these cases. When he began tracking court cases involving AI and fake citations earlier this year, he encountered a few cases a month. He now sees several a day. Large language models confidently present falsehoods as fact, especially when no supporting facts exist.
“The harder your legal argument is to make, the more the model will tend to hallucinate, because it will try to please you,” he said. “That's where confirmation bias comes in.”
A May 2024 analysis by Stanford University's RegLab found that although three out of four lawyers plan to use generative AI in their practice, some forms of AI produce hallucinations in one out of three queries. Spotting fake material cited in legal filings may become harder as models grow in size.
Another tracker of cases in which lawyers cite nonexistent legal authority because of AI use has identified 52 such cases in California and more than 600 nationwide. That number is expected to rise in the near future as AI innovation outpaces lawyers' education, said Nicholas Sanctis, a law student at an Ohio law school.
Jenny Wondracek, who runs the tracking project, said she expects the trend to worsen, as she still encounters lawyers who do not know that AI makes things up, or who believe that legal technology tools can catch any false material generated by language models.
“I think we would see a decrease if (lawyers) just understood the basics of the technology,” she said.
Like Charlotin, she suspects there are more AI-fabricated cases in state court filings than in federal ones, but the lack of standardized filing methods makes this difficult to verify. She said she encounters fake citations most often among overburdened lawyers or people who choose to represent themselves in family court.
She suspects the number of filings by lawyers who use AI and cite fake cases will continue to rise, but added that lawyers are not the only ones engaging in the practice. In recent weeks she has documented three judges citing fake legal authority in their decisions.
As California considers how to handle generative AI and fake citations, Wondracek said it could consider approaches taken elsewhere, such as temporary suspensions, requiring lawyers to take courses on how to use AI ethically, or requiring them to teach law students how to avoid making the same mistake.
Mark McKenna, co-director of the UCLA Institute for Technology, Law and Policy, praised fines like the one against Mostafavi as punishing lawyers for “an abdication of your responsibility as a party representing someone.” He believes the problem “will get worse before it gets better,” because law schools and private companies are rushing to adopt AI without considering the right ways to use it.
UCLA law professor Andrew Selbst agrees, pointing out that the clerks working for judges are recent law school graduates, and that students are being bombarded with the message that they must use AI or be left behind. Teachers and other professionals report feeling similar pressure.
“This is being shoved down all our throats,” he said. “It's being pushed on companies and schools and lots of places, and we have not yet grappled with the consequences of that.”
This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.