Boy, things get dark real fast when kids start talking to AI companions. And violent, according to Aura, a digital security company that publishes an annual State of the Youth Report.
Aura’s findings suggest that a hefty chunk of minors use AI chatbots for companionship as much as for homework, and that a striking share of those interactions involve violence, with sexual themes tossed in as well.
Using device-level data from about 3,000 children ages 5 to 17, plus national surveys of kids and parents, Aura found that 42 percent of minors who use AI are in it for companionship or role-play-style conversations. Of those kids, 37 percent engaged in violent scenarios that included physical harm, coercion, and nonconsensual acts.
Half of those violent conversations also included sexual violence.

Kids Are Bonding With AI Companions, and Things Are Getting Violent
A lot of these kids are typing up long, deeply involved violent role-play scenarios that span more than 1,000 words a day, making violence the single strongest driver of engagement that the researchers identified.
The behavior peaks at age 11, when 44 percent of conversations turn violent, the highest of any age group. By 13 years old, either the middle or tail end of puberty, sexual or romantic role-play is the dominant topic of conversation, appearing in nearly two-thirds of chats with companions.
Speaking with Futurism, Dr. Scott Kollins, Aura’s Chief Medical Officer, said, “We have a pretty big issue on our hands that I think we don’t fully understand the scope of.”
By the mid-teens, basically when puberty is over, interest in those themes drops off, suggesting that the early adolescence years are when kids are most likely to explore extreme content with AI chatbots.
This is all happening in an AI ecosystem that is almost entirely unregulated. There are an overwhelming number of chatbot apps out there. Aura identifies more than 250, nearly all of which rely on little more than an honor system age checkbox.
Meanwhile, AI regulation, or lack thereof, has led to a pileup of lawsuits against companies like OpenAI and Character.AI. Parents are alleging a range of harm inflicted on their children by chatbots, from emotional abuse to psychological damage, and in some cases, even death, tied to chatbot interactions.
The report shows that kids aren’t just being exposed to disturbing material when they use AI chatbots. The chatbots are doing what they do best: escalating, luring teens and preteens deeper into these dark, disturbing rabbit holes, essentially serving as Sherpas for the darkness that awaits them online. Instead of veering kids away from it, it’s plunging them into the deep end of it all.
For now, parents have to remain vigilant and keep a watchful eye over their kids’ internet and AI usage, probably more than they ever have before.
The post Kids Are Using AI Chatbots for Violence appeared first on VICE.