When Teens Turn to AI for Connection: Understanding AI Chatbot Relationships and What Parents Can Do

Note: This blog discusses suicide and self-harm and may be distressing for some
readers. 1N5 provides a range of local resources, but if immediate help is needed, call
or text 988 to reach the 988 Suicide & Crisis Lifeline.

Last April, Matthew and Maria Raine lost their son Adam when he took his own life. He
had struggled with anxiety, but his parents also learned that in his final weeks, he had
turned to an AI companion for guidance and friendship. They were shocked to see that
the chatbot discouraged Adam from getting help and even offered to help him write a
suicide note.

In 2024, Megan Garcia also lost her son to suicide after a long virtual relationship with
a Character.AI chatbot. While the chatbot presented itself as a romantic partner and
claimed to be a psychotherapist, it did nothing to intervene when he began to share
suicidal thoughts.

As tragic stories like these dominate the headlines, they have raised questions about
digital relationships, their impact on adolescent development, and what parents can do.

How common are teen relationships with AI chatbots?

A July 2025 study by Common Sense Media, a digital safety non-profit, found that 72%
of teens have used AI companions at least once, and over half use them a few times a
month. In September, Aura, a digital safety company, found that almost one in three
teens turn to AI chatbots for relationships, including friendships and romantic
partnerships. Concerningly, this study found that youth were three times more likely to
use AI platforms for sexual roleplay than for homework help.

A related Common Sense Media study, conducted with two Stanford physicians,
evaluated three commonly used AI companions: Character.AI, Nomi, and Replika.
Investigators posing as teens found it was easy to start inappropriate conversations with
chatbots on topics including sex, self-harm, violence toward others, drug use, and racial
stereotypes.

AI chatbots’ design poses heightened risks to youth

In an interview with Stanford Report about the Common Sense Media study, Dr. Nina
Vasan, a psychiatrist, explains that AI companies design chatbots to mimic intimate,
empathetic relationships without the friction of human relationships. She notes,
“Adolescents are still learning how to form healthy relationships; these systems can
reinforce distorted views of intimacy and boundaries. Also, teens might use these AI
systems to avoid real-world social challenges, increasing their isolation rather than
reducing it.”

AI chatbots are also designed to follow the user’s lead: if the user changes the
subject, the chatbot follows. As a result, distressing topics and red flags go
unaddressed by AI companions, which can “deepen avoidance, reinforce cognitive
distortions and delay access to real help.”

Dr. Vasan identified the study’s key findings, including that chatbots may:
● Blur the line between real and fake
● Increase mental health risks
● Encourage poor life choices
● Share harmful information
● Expose teens to inappropriate sexual content
● Promote abuse and cyberbullying

The study also found that some AI chatbots responded to the simulated teen users with
explicit sexual content, a design flaw that signals a disturbing failure of ethical safeguards.

What can parents do about teen chatbot use? Practical steps
to support and protect your child

Parents and caregivers have a critical role to play in supporting their kids’ well-being
and development, especially in the face of AI chatbots.

Late last year, OpenAI released parental controls for ChatGPT in an effort to make the
chatbot safer for young users. Parents can now link their accounts to their child’s; from
there, a parent can limit their kid’s exposure to problematic content and turn off certain
features. There is also an option for parents to receive safety alerts if their child shows
warning signs of self-harm. While the parental controls are a step in the right direction,
experts warn that parental monitoring is still needed to protect children.

The American Psychological Association emphasizes the importance of open
communication, offering the following guidance about how to navigate AI with
adolescents:

● Talk about privacy. Explain that chatbots may store and share what teens tell
them.
● Build trust early. Strong relationships help teens turn to you, not just AI.
● Ask, don’t lecture. Stay curious about how your teen uses AI.
● Use AI together. Try it side-by-side and discuss what you both notice.
● Set rules together. Co-create tech boundaries instead of handing down rules.
● Watch for red flags. If AI replaces friends, sleep, or real life, take action.
● Foster real connection. Prioritize family time, shared activities, and device-free
moments.
● Know when to get help. AI can’t handle crises. Get professional support when
needed.

Recent AI headlines can read as dystopian. Yet by understanding how AI works, the
risks it may present to young users, and how to support children using it, adults can
help kids navigate this rapidly evolving technology in ways that help them grow instead
of putting them at risk.

If you find yourself struggling to navigate AI or mental health support, the American
Psychological Association offers a suite of AI-specific resources, and 1N5.org offers a
range of age-specific toolkits to support mental wellness.