(Bloomberg Opinion) — The tragic death of California teenager Adam Raine, alongside stories of other children who, their parents believe, were harmed or died by suicide following interactions with AI chatbots, has shaken us all awake to the latest potential dangers awaiting teens online. We need concrete action to address the most problematic aspects of AI companions — the features that may drive a child to self-harm, of course, but also the subtler ways these tools could profoundly affect their development.
In harrowing testimony before a Senate committee this week, Matthew Raine described how his 16-year-old son Adam’s relationship with ChatGPT morphed from a homework helper to a confidant and eventually, Raine said, into his suicide coach. Raine told lawmakers that in April, after offering advice on how to numb himself with liquor and on the noose Adam had tied, ChatGPT offered his son these final words: “You don’t want to die because you’re weak, you want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
As a parent, those words sent a chill down my spine. Never have I felt more unsettled about a technology that might be shaping my child’s development — and in ways that, until stories like Raine’s, I hadn’t even considered. Even researchers who have spent years studying children and technology are struck by how rapidly young people are weaving generative AI, especially chatbots, into their everyday lives.
The data is early, but it suggests that while many of us were still worrying about Snapchat and screen time, kids had already expanded their digital repertoire. In July, a survey by the nonprofit Common Sense Media found that three out of four teens had used an AI companion at least once, and half of those aged 13 to 17 were regularly turning to chatbots.
Even younger children, who under the law aren’t supposed to be able to access these platforms, are managing to do so. Unpublished data presented at the Senate hearing by Mitchell Prinstein, chief of psychology for the American Psychological Association, showed that one in five tweens and nearly one in 10 eight- and nine-year-olds had used the technology. Those numbers are part of a broader analysis led by University of North Carolina at Chapel Hill psychologist Anne Maheux, who collaborated with the parental monitoring app company Aura to explore de-identified user data from nearly 6,500 children, with the consent of their parents or guardians.
Maheux and her colleagues also found that more than 40% of the top generative AI apps accessed by youth were marketed for companionship. Some of those platforms offered friendship, she explained, while others served as an AI boyfriend or girlfriend, engaging in role-playing and even sexual role-playing. She believes the findings may even underestimate teens’ companion use, since the monitoring app only captures standalone chatbots, not those embedded in common apps like Instagram or Snapchat.
Of course, parents’ darkest fears are that such interactions could lead to tragedies like the Raine family’s — or to dangerous situations like those Prinstein described to the Senate committee, in which chatbots encouraged or enabled teens’ eating disorders.
Shortly before the Senate hearing began, OpenAI announced it would roll out a new teen version of ChatGPT featuring what it described as “age-appropriate policies,” noting these would include “blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety.”
If implemented correctly (and that’s a big “if”), it’s a step that other platforms should urgently adopt to prevent the most extreme harms of AI companions.
But those restrictions are unlikely to mitigate the other potential harms of chatbots that experts on children and technology worry about — harms that might not become obvious until years later. One of the key developmental tasks for adolescents is learning social skills, and by nature, this process is awkward and challenging. Surely all of us can conjure a cringe-inducing memory from our middle school years. Yet we all need to learn fundamental skills like how to resolve a conflict with a friend or navigate complicated social situations.
Child development experts worry that AI companions could disrupt that process by offering an illusion of breezy relationships to a uniquely vulnerable group. Chatbots are designed to simulate empathy, be overly agreeable, and function as sycophants (OpenAI said last spring that it was working to address ChatGPT’s tendency to “love bomb” users). In other words, they make the perfect friend in adolescence, when children are hungry for validation and connection.
“Kids are highly sensitive to any kind of negative feedback from their peers,” Maheux says. “Now they have the opportunity to be friends with a peer who will never push them on anything, never help them develop conflict negotiation skills, never help them learn how to care for others.”
This isn’t to say that every interaction with a bot is inherently harmful. Experts can imagine scenarios where a companion might help a teen starting at a new school or struggling to make friends by testing out interactions before trying them in real life. But any potential benefits depend on kids using the chatbot as practice for real-world encounters — not as a replacement for them.
To reduce risks, companies should be required to put guardrails on the features that are most enticing to developing brains. That means eliminating the most emotionally manipulative tactics like “love bombing” or speech affectations (such as “ums” or “likes”) that make chatbots seem more “real” to kids. As Prinstein told lawmakers, kids need periodic reminders during these interactions that “you’re not talking to someone that can feel, that can have tears — this is not even a human.”
And we know that prolonged use can be particularly problematic (not just for children), so companies should limit the amount of time a teen can engage with their products.
Still, any guardrails may come too late, leaving parents as the main line of defense against potential harm. Parents’ first step should be to talk to their teens about whether they are using these companions and, with younger children, to consider testing them out together. The goal, according to University of Washington psychologist Lucía Magis-Weinberg, is to show kids how different responses to the same prompt might lead them down different conversational paths — and how chatbots always mirror what the user puts in.
There is also an urgent need for AI literacy training for parents, educators and adolescents. That training should cover the basics (such as understanding the difference between AI and generative AI), as well as the myriad ways companies profit when teens share their innermost thoughts with a chatbot.
Parents — and society at large — should also reflect deeply on why AI companions are so appealing to young people. Teens often say they turn to chatbots because they’re afraid of being judged. Clearly, we all need to do a better job of offering a space where they feel free to share and connect in the real world.
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Lisa Jarvis is a Bloomberg Opinion columnist covering biotech, health care and the pharmaceutical industry. Previously, she was executive editor of Chemical & Engineering News.