
Top legal officials in dozens of US states have seen how generative AI chatbots and characters, if handled poorly, can be bad for children. And they have a stern warning for the industry: If you knowingly harm kids, you will answer for it.
That message came in a letter sent this week by 44 state attorneys general to the heads of 13 AI companies. The AGs said they were writing to tell the executives that they "will use every aspect of our authority to protect children from exploitation by predatory artificial intelligence products."
Fears about the impact of artificial intelligence on children have been around for a while, but attention has intensified in recent weeks. The AGs specifically cited a recent report from Reuters, which revealed Meta guidelines that allowed its AI to engage children in "romantic or sensual" conversations. The company told Reuters that the cited examples were "erroneous and inconsistent" with its policies, which prohibit content that sexualizes children.
Meta did not immediately respond to a request for comment.
The AGs said the issues were not limited to Meta. "In the short history of companion chatbots," they wrote, "we have repeatedly seen companies show an inability or indifference to their basic obligations to protect children."
The risks of parasocial relationships and interactions with AI chatbots have become more pronounced. In June, the American Psychological Association issued a warning calling for guardrails around teens' and young adults' use of AI, saying parents should help their children use the tools appropriately. People increasingly using AI chatbots as "therapists" has exposed vulnerable users to harmful advice at the moments they are most at risk. A study released this week found that large language models respond inconsistently to questions about suicide.
At the same time, there are few actual rules about what AI developers can and cannot do and what these tools are allowed to do. A move to block states from enforcing their own AI laws and regulations failed earlier this year, but there is still no federal framework for how to make AI safe. Lawmakers and advocates, like the AGs in this week's letter, have said they want to avoid a free-for-all like the one seen in the social media era, but whether clear rules will actually materialize remains to be seen. President Trump's AI Action Plan, released in July, focused on reducing regulations on AI companies, not introducing new ones.
Read more: 29 ways you can make gen AI work for you, according to our experts
The state AGs said they would take matters into their own hands if necessary.
"You will be held accountable for your decisions," they wrote. "Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. The lesson has been learned. The potential harms of AI, like its potential benefits, dwarf the impact of social media. We all wish you success in the race for AI dominance. But we are paying attention."
If you feel that you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room for immediate help. Explain that it is a psychiatric emergency and ask for someone trained in these kinds of situations. If you are struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.