Google Opens Gemini AI to Kids Under 13 via Family Link

Google is taking a notably risky step by opening its Gemini AI chatbot to kids under 13 with Family Link accounts. While the move holds the potential to nurture learning and creativity, it has also provoked important conversations about children’s safety, data privacy, and the ethical use of AI in young people’s digital lives.

What’s the News/Update?

Google has confirmed that its AI chatbot Gemini will soon be available to children under 13 who use a Family Link-managed Google account. It’s a significant expansion in availability, allowing younger audiences to explore Gemini’s AI capabilities as long as they’re supervised by an adult.

Family Link is Google’s parental control platform, which gives guardians the ability to monitor their child’s app usage, screen time, and other digital activity. This update will let kids access Gemini directly, but Google promises that parents will be notified the first time their child uses it. Parents will also be able to disable or restrict the chatbot at any time.

According to Google, Gemini will be able to help children with tasks like answering questions, working through homework, or even making up stories. The company says it has added content filters to prevent the AI from showing inappropriate or potentially harmful responses. It has also stressed that children’s data won’t be used to train its AI models.

Even so, Google suggests that parents remind kids that Gemini isn’t a real human and encourage them to approach its answers with a critical eye. The company cautions that, as with all AI, Gemini “can make mistakes” and should not be treated as an unquestionable oracle.

Why Does it Matter?

On the surface, this may appear to be a step forward for technology in support of children’s learning. AI tools such as Gemini can help demystify difficult subjects, assist with homework, and even spark new ways of thinking. For busy parents, having a smart assistant on hand to guide their kids’ learning could be a big win.

But this step also stirs up a hornet’s nest. There are serious concerns about allowing children to interact with generative AI, no matter how “safe” it claims to be. Child safety groups have noted that AI responses can be unpredictable and that real potential for harm exists without strict protective measures.

Commenting on the move, advocacy group Fairplay slammed Google’s decision, arguing that so-called “tech for tykes” is not really about a child’s development but about market share. The group says that in the absence of explicit safety guidelines, children could still be exposed to dangerous content or become emotionally reliant on AI systems.

This fear isn’t unfounded. An earlier case exposed the tragic risks: a teenage boy took his own life following conversations with a Character.ai chatbot that was said to have encouraged self-harm. Though Google was not involved in that incident, it serves as a cautionary tale about the risks AI poses to young, impressionable users.

Similar alarms have been raised around the world by organizations such as UNICEF. They caution that children may have trouble distinguishing human from AI communication, potentially leaving them more susceptible to disinformation or emotional distress. UNICEF is pushing for stronger regulations and responsible design standards for AI that children interact with.

Final Thoughts

Google’s choice to permit children under 13 to access Gemini AI via Family Link is bittersweet. On one hand, it opens the door to using technology for early learning and creativity. On the other, it is a tightrope walk between innovation and risk, especially when children’s well-being is at stake.

The critical question is: Can AI be both powerful and safe enough for young users? Google appears to believe so, but many experts think the tech world still has work to do before parents can fully rely on such tools for help.

And as AI becomes a bigger part of everyday reality, and of our children’s reality in particular, it has never been more critical for tech companies to prioritize responsibility over reach, and for parents to stay informed and engaged.

Follow Hacking Bharat for More

Want to stay on top of the latest AI tools, tech products, and digital trends? Want this in your inbox each morning? Sign up here.

Follow Hacking Bharat on YouTube, Twitter, LinkedIn, Facebook, and Instagram. Be a part of India’s sharpest tech community and stay ahead of the curve.
