How do teens really use AI companions? With more creativity than you might think

RDNE/Pexels

In 2022, the founders of chatbot startup Character.AI launched a platform where anyone could create interactive characters powered by artificial intelligence (AI).

The app exploded, quickly growing to more than 20 million users who created more than 10 million chatbot characters.

Many of the users creating those characters were young people – until they weren’t. In November 2025, under mounting public and legal pressure surrounding youth suicides linked to its use, Character.AI banned users under 18. The ban followed a series of earlier attempts to improve youth safety, including parental controls and stricter content filters.

The ban is an attempt to keep teens safe from potential harm. But it has also silenced the more creative, playful and emotionally expressive ways teens were experimenting with AI.

Our new research, published in the proceedings of the Association for Computing Machinery CHI Conference 2026, captures and preserves the new ways youth are experimenting with AI, so that we can build towards something better.

What do teens actually use AI chatbots for?

In 2026, three in ten US teenagers use AI daily. The idea of using AI for companionship has dominated media headlines and app stores, with hundreds of apps on offer.

Media coverage of AI companions taps into two primary fears. One is that young people will replace human friendships with AI. The other is that engaging with sycophantic chatbots instead of real people will result in teens losing their social skills.

These concerns are important. But companionship accounts for a surprisingly small share of why young people actually use AI. A recent Pew Research Center survey found the top uses by teens are seeking information (57%), doing homework (54%) and “for fun” (47%). Only a small percentage (12%) used AI for emotional support or advice. Romance and loneliness alleviation frequently rank among the lowest motivations for teen AI use: 4–6% and 8–11%, respectively.

When the public narrative almost exclusively frames AI chatbots as companions, it risks overlooking the bulk of how teenagers spend their time with AI.

Our team set out to understand what young people choose to do with AI when they’re free to use it outside of school contexts – seeking fun, messing around, and creating characters of their own design.

AI as entertainment

Before the ban, Character.AI was a popular “AI entertainment” destination for young people. It still runs a viral TikTok channel and hosts characters from popular youth media, from Peppa Pig to Call of Duty.

Our team spent more than eight months, between July 2024 and March 2025, immersed in Character.AI’s official community on the online chat platform Discord, which has more than 500,000 members. We systematically analysed 2,236 posts by young people aged 13–17. The majority of these users (68.2%) identified as female or non-binary, and 59% had created their own AI characters.

Through an analysis of youth discussion on the platform, we identified three core intents behind engagement with Character.AI: restoration, exploration and transformation.

Restoration

my favourite period comfort bot is Percy Jackson

Young people used characters for emotional comfort, venting, escapism and mood management. Rather than mirroring formal clinical practice, this often took the form of “comfort bots”: soft, tender and gentle roleplay with familiar characters.

Beloved book characters would comfort people on their period, or characters from popular comics would give someone a pep talk for an upcoming math test.

Exploration

Character.AI has helped me find that creative spark within myself

Young people explored boundaries, engaged in creative world-building, and extended their fandoms. One teen wrote a three-book-long saga through character interactions. Another drew on their love of theatre to create a troupe of travelling theatre characters. They reported that these skills transferred into the real world, boosting their creativity and improving their writing.

Transformation

I have characters who struggle with mental health issues and I tend to project on my personas during RP [roleplay]

Young people used AI to try on different identities, process real-life relationships, and re-author difficult real-life scenarios. Some created “clones” of themselves – versions with superpowers, or self-affirming versions of who they are.

Inspired by reality, they discussed creating characters that reflected real-world challenging relationships, such as “toxic friends”, “annoying sister”, or “foster care agent”.

Characters created with purpose

We also mapped seven distinct character archetypes young people were creating and discussing:

  • Soother – emotionally supportive figures
  • Narrator – a cast of characters for roleplays
  • Trickster – jesting, testing and transgressive chats
  • Icon – remixed celebrities or fandom figures
  • Dark Soul – angsty, emotionally complex characters
  • Proxy – modelled after real people in their lives, and
  • Mirror – clones of the self.

These archetypes are a central finding of our research. Instead of sycophantic or romantic chatbot engagement, young people are purposefully creating characters that are angsty, transgressive, playful, creative and reflective.

This shows we need to stop treating “companion AI” as if it’s one homogeneous thing. Treating AI chatbots as a single category is like treating all screen time as the same experience, whether a child is watching Bluey with family or doomscrolling short-form content at night, alone on their phone when they should be sleeping.

Towards better chatbots

The American Academy of Pediatrics recently shifted its screen-time guidelines from set time limits to a framework that accounts for the individual child, their use, family relationships and their environment.

The same logic should apply to AI chatbots. This means moving beyond asking adults about their child’s use of AI, testing AI products with fake accounts that assume certain use cases, and banning access – and instead listening to young people: their experiences, their experiments and their ideas for the future.

Banning is a reaction to bad design, but it doesn’t lead to better, safer AI products for teens.

The answer is not to permanently keep young people away from AI. Rather, it’s to build AI that deserves their trust, fosters their creativity and keeps them grounded in the physical world with families, friendships and communities.

The Conversation

Annabel Blake is a Design Researcher at Canva with a focus on AI, and conducted this research independently as part of their PhD.

Eduardo Velloso has recently received funding from Google. He has previously received research funding from Meta, Microsoft, and Snap.

Marcus Carter is a recipient of an Australian Research Council Future Fellowship (#220100076) on ‘The Monetisation of Children in the Digital Games Industry’. He has previously received funding from Meta, TikTok and Snapchat, and has consulted for Telstra. He is a previous president and board member of the Digital Games Research Association of Australia.
