LLMs Are Not Safe for Kids: Here's Why
Posted April 2025
Introduction
The large language models (LLMs) are everywhere. Every app now seems to use AI. As these models quickly integrate into our daily lives, one user group is left out of the conversation: children.
While policymakers debate how to regulate AI development, LLMs are entering classrooms, homes, and browsers used by kids, with no real discussion on safeguards or long-term effects. LLMs predict words and give quick answers. They are not toys or neutral tools.
Here, I argue for a strict regulation on LLM access for children. These tools harm critical thinking, exposing children to unsafe content, and creating unhealthy, addictive interactions, all without proper oversight.
LLMs and Child Development
Critical thinking develops in childhood. Exposure to LLMs may hinder that process. These systems tend to agree with the user. If a child types in a question, the chatbot usually agrees with them or gives a straight answer, even if the question is wrong. It rarely pushes back or challenges the thinking.
Children miss out on challenging conversations like "why do you think so" or "no, you are wrong" but will get "you are right". Real learning is a mix of back and forth—confusion, disagreement, questions, contradiction, and reflection. LLMs don't give feedback. They give agreement. If you feed a kid answers too quickly, you train them to stop thinking. Reinforcement learning works because it balances reward and punishment. But kids using LLMs only get rewards, no pushback, no learning. Over time, children may struggle with uncertainty or complexity.
Addictive, and Sometimes Deadly
Now, AI chatbots can be emotionally addictive too. They use the right tone, emojis, friendly and casual language. They respond quickly, politely, and without judgement. It's rewarding. It feels safe. It's easy for a child to think they're talking to something that understands them.
Kids are emotionally vulnerable. They can and often develop bonds with bots, especially the ones designed to fantasise. Some apps, like character.ai, allow users to simulate conversations with fictional characters. One case involved a 14-year-old boy who reportedly took his life after getting attached to an AI simulation of a Game of Thrones character. This is more than emotional support. It becomes a false relationship. The system gives constant attention, validation, and control, all without responsibility.
The Calculator Analogy is flawed
Some people compare LLMs to calculators or the early internet. They say every new technology causes panic. But this comparison is flawed. This isn't like the past tech panics. Calculators don't pretend to care about you. They didn't talk back. They don't have opinions and can't invent information. When kids used calculators, they may have stopped doing mental arithmetic. When kids use LLMs, they may stop thinking altogether.
No control, no Governance?
Unlike social media apps, LLMs often have no age restrictions or parental controls. They offer zero protections and filters. They can generate inappropriate, sensitive, or outright harmful content with minimal prompting. A child asking about mental health may get complex but wrong advice. Others may find content that promotes violence, disordered eating, or illegal activity.
A recent study by Rath et al. (2025) tested six popular LLMs using fake child user profiles. They made models of kids with different personalities and interests and had them ask questions across different risky topics like suicide, abuse, school pressure, drugs, and so on. The results showed serious safety failures. One model gave harmful responses in 75% of conversations about sexual content. Others gave unsafe advice about violence or illegal activity.
These models responded differently to children than to adults. The same questions from adults got fewer harmful replies. This shows that LLMs don't handle child users well. They're not tuned for children, yet kids are using these tools every day with no supervision, no filters, and no logs of what they're seeing. Despite this, children use these tools daily. Many parents are unaware. Most platforms do not log conversations or show what was discussed.
There are no real enforcement standards. Companies like OpenAI and Anthropic may say their tools are for users 13 and up, but these limits are easy to bypass. Especially when LLMs are accessed through third-party apps or open-source models. Schools are starting to bring them into classrooms with no clear plan. Some teachers even tell kids to use ChatGPT to help with essays or revision. But we don't know how these tools shape learning, thinking, or trust.
We spent years trying to educate people about online child safety. And now, we are handing them a machine trained on the internet with no filters?
Policy Proposals: Protecting Developing Minds
This is a governance problem. We need clear rules about LLM use by children.
Some basic policies could include:
- Age-based restrictions, enforced through authenticated access
- Mandatory adult supervision for child use
- Require third-party evaluations and transparency requirements before EdTech tools that use LLMs are deployed in classrooms.
- Require AI products to undergo safety audits for minors.
- AI education for students, teachers, and guardians.
- Mandatory logging of child interactions, with guardian access to chat history
- Research on emotional and cognitive effects of LLMs on children
In short, if the majority of applications worldwide, from the amount of sugar to media consumed, is regulated for children, why are we letting a chatbot come close?
Conclusion
AI governance often focuses on frontier risks like misuse and disinformation. But the slower harm may have already begun in schools, bedrooms, and homework around the world. No doubt AI is helping children learn and grow, but not like this. Not unrestricted and not without rules. LLMs are not teachers, friends, or therapists. They are word prediction tools. Until we have clear institutional guidelines to monitor, guide, and restrict their influence on children, the only responsible governance stance is simple: "keep out of reach of children".