Google plans to roll out its Gemini AI chatbot this week for children under 13 with parent-managed accounts, according to reports, in another push to bring AI products to young users.
The new AI feature will be available to accounts enrolled in Family Link, a Google service that lets parents set up supervised accounts for their children to use Gmail and other services such as YouTube.
Introducing this feature could increase AI use among a vulnerable population: schoolchildren whose parents may not be able to monitor their online activity at all times.
“Gemini Apps will soon be available for your child. That means your child will be able to use Gemini,” the company said in an email this week to the parent of an eight-year-old child.
Recently, Meta’s AI chatbot drew criticism for a ‘romantic role play’ feature that allowed its bots to hold sexually explicit conversations with underage accounts. Although these chatbots only reflect the data they are trained on, they can be misused for various purposes by users of all ages.
US President Donald Trump recently urged schools to introduce AI tools for teaching and learning. While the administration believes educators are already seeking technology-enhanced approaches to teaching that are “safe, effective, and scalable,” experts say this might not be necessary.
Bhumika Mahajan, a Responsible AI expert, previously told AIM that the US government is planning to remove teachers and replace them with AI chatbots. “They will decide the curriculum, give lectures, and do everything else for study purposes, which is not required. So, there should be limited usage…and it should be under surveillance.”
Children could be exposed to harmful information, posing a risk to their safety. According to a report by the non-profit organisation Common Sense Media, AI companion tools pose “unacceptable risks” to children, particularly those under 18. The report emphasised that these tools should not be used by minors.
James P Steyer, founder and CEO of Common Sense Media, said, “Social AI companions are not safe for kids. They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains.”
“Our testing showed these systems easily produce harmful responses, including sexual misconduct, stereotypes, and dangerous ‘advice’ that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people,” Steyer added.
According to The Indian Express, Google acknowledged the risks in its email to families, stating that “Gemini can make mistakes.” The email recommended that parents help their children think critically about chatbots and teach them to fact-check Gemini’s answers.
Children can access the chatbot on their own, but Google said in the email that it would notify parents the first time their child uses Gemini.
The post Google to Introduce Its AI Chatbot for Children Under 13 appeared first on Analytics India Magazine.