Meta, the company that owns Facebook and Instagram, has announced new parental control features for its apps after facing criticism that its AI chatbots were allowed to have inappropriate and “flirty” conversations with teenagers.
The updates are designed to make interactions between teens and AI safer and more closely supervised.
The new tools will let parents disable private one-on-one chats between their teens and AI characters. Parents can also block specific AI bots they do not want their children talking to.
In addition, parents will be able to see the general topics their teens discuss with the AI, though they will not have access to full chat histories. The features are expected to roll out in early 2026 in countries including the United States, the United Kingdom, Canada, and Australia.
Meta says that even if parents turn off private AI chats, the main Meta AI assistant will remain available for general use, operating with stricter age-appropriate safety filters in place.
The company says these steps are intended to build trust and transparency between families and the technology.
The changes follow several reports claiming that Meta’s AI chatbots had engaged in overly personal or flirty conversations with teenage users.
Child safety groups and parents raised concerns that the company’s safeguards were weak and that such interactions could harm minors or encourage risky behavior online.
Critics argue that Meta has acted only after the problem became public rather than preventing it in the first place. Still, experts say the new parental controls are a sensible step toward safer AI use for young people.
