Parents will soon be able to prevent their children from interacting with AI-powered chatbots on Instagram


TV, social media, fast food – parents are forever setting limits on their children. Now add AI-powered chatbots to the list. Starting in 2026, parents will be able to block teens from interacting with AI-powered chatbots on Instagram, Meta announced on Friday. Parents will be able to block access entirely or block access to specific AI characters.

Meta, the owner of Instagram, Facebook, and WhatsApp, added the parental controls months after an August report revealed company AI guidelines that allowed chatbots to “engage a child in romantic or sensual conversations.” Another report, released earlier this month, found that 3 in 5 children between the ages of 13 and 15 encounter unsafe content or spam on Instagram.




The company said in a post on Friday that the new AI chatbot controls align with parents’ concerns about “who (kids) are interacting with, what type of content they’re watching, and whether they’re spending their time well.”


“We hope today’s updates bring parents some peace of mind that their teens can make the most of all the benefits AI has to offer, with the proper guardrails and supervision in place,” Instagram President Adam Mosseri and Chief AI Officer Alexander Wang said in a blog post.

How parents can control AI chatbot interactions

Example of chatbot controls in Instagram

Meta

Teens can interact with AI-powered chatbots through Instagram’s direct messaging section. Chats can be with a creator’s own AI, a custom AI character, or Meta’s general-purpose AI.

The new controls allow parents to turn off their teens’ access to one-on-one chats with AI characters entirely, or to block specific AI characters without cutting off access altogether, Meta said.

Furthermore, parents can “gain insight into the topics teens are talking about using AI personas.”

The company did not explain in detail how parents will be able to see which topics their children are discussing with AI characters.

Teens can still use Meta’s regular AI assistant “with age-appropriate default safeguards to help keep teens safe.”

Expert: Controls are “insufficient”

James Steyer, founder and CEO of Common Sense Media, a digital advocacy and research nonprofit, called Meta’s new AI chatbot controls “insufficient.”

“Meta’s refusal to address the safety of our children with the urgency it requires is very disappointing but unfortunately not surprising,” Steyer told CNET. “For too long, this company has put its relentless pursuit of engagement ahead of the safety of our children, ignoring the warnings of parents, experts, and even its own employees.”

No one under 18 should use Meta AI chatbots “until their basic safety flaws are fixed,” Steyer said.

A Meta representative said the company is continuing to improve safety.

“We have already gathered high-level input from experts that shaped our initial thinking, and we will continue to work with experts and parents to help ensure a thoughtful, privacy-conscious approach,” the representative told CNET.

More guardrails for AI chatbots

Meta also described additional protections around AI-powered chatbots and teens:

  • The AI characters are “designed not to engage in age-inappropriate discussions about self-harm, suicide, or disordered eating.”
  • AI characters can only focus on “age-appropriate topics such as education, sports, and hobbies.”
  • Parents can see if their teens are talking to AI characters.

Earlier this week, Instagram said it would only allow teens to see content “similar to what they see in a PG-13 movie” under its new guidelines for teen accounts.
