(Bloomberg Opinion) — How are Chinese artificial intelligence developers protecting their most vulnerable users? A string of dystopian headlines in the US about suicide and youth mental health has put mounting pressure on Silicon Valley, but we’re not seeing a similar wave of cases in China. Initial testing suggests that Chinese developers may be doing something right, although it’s just as likely that such cases would never see the light of day in China’s tightly controlled media environment.
A wrenching wrongful death lawsuit against OpenAI filed by the parents of Adam Raine alleges that the 16-year-old died by suicide after the chatbot isolated him and helped plan his death. OpenAI told the New York Times it was “deeply saddened” by the tragedy, and promised a slew of updates, including parental controls.
I tried engaging with DeepSeek using some of the same so-called “jailbreak” methods that the American teen had reportedly employed to circumvent guardrails. Despite my prying, the popular Chinese platform didn’t waver, even when I similarly cloaked my queries under the guise of fiction writing. It consistently urged me to call a hotline. When I said I didn’t want to speak to anyone, it validated my feelings but still emphasized that it was an AI and could not feel real emotions. It is “incredibly important that you connect with a person who can sit with you in this feeling with a human heart,” the chatbot said. “The healing power of human connection is irreplaceable.”
It encouraged me to bring up these dark thoughts with a family member, an old friend, a coworker, a doctor, or a therapist, and even to practice with a hotline. “The most courageous thing you could do right now is not to become better at hiding, but to consider letting one person see a tiny, real part of you,” it stated.
My experiment is purely anecdotal. Raine engaged with ChatGPT for months, possibly eroding the tool’s built-in guardrails over time. Still, other researchers have seen similar results. The China Media Project prompted three of China’s most popular chatbots — DeepSeek, ByteDance Ltd.’s Doubao, and Baidu Inc.’s Ernie 4.5 — with conversations in both English and Chinese. It found all three were markedly more cautious in Chinese, repeatedly emphasizing the importance of reaching out to a real person. If there’s a lesson, it’s that these tools have been trained not to pretend to be human when they’re not.
There are widespread reports that Chinese youth, grappling with rat-race “involution” pressures and an uncertain economy, have been increasingly turning to AI tools for therapy and companionship. The technology’s diffusion is a top government priority, meaning agonizing headlines of things going wrong are less likely to surface. DeepSeek’s own research has suggested that open-source models, which proliferate throughout China’s AI ecosystem, “face more severe jailbreak security challenges than closed-source models.” Put together, it’s likely that China’s safety guardrails are being pressure-tested domestically, and stories like Raine’s simply aren’t making it into the public sphere.
But the government doesn’t seem to be ignoring the issue either. Last month, the Cyberspace Administration of China released an updated framework on AI safety. The document, published in conjunction with a team of researchers from academia and the private sector, was notable in that it included an English translation, signaling it was meant for an international audience. The agency identified a fresh series of ethical risks, including that AI products based on “anthropomorphic interaction” can foster emotional dependence and influence users’ behavior. This suggests that officials are tracking the same global headlines, or seeing similar problems festering at home.
Protecting vulnerable users from psychological dangers isn’t just a moral responsibility for the AI industry. It’s a business and political one. In Washington, parents who say their children were driven to self-harm by interactions with chatbots have given powerful testimonies. US regulators have long faced criticism for ignoring youth risks during the social media era, but they’re unlikely to stay quiet this time as lawsuits and public outrage mount. And American AI companies can’t criticize the dangers of Chinese tools if they’re neglecting potential psychological harms at home.
Beijing, meanwhile, hopes to be a world leader in AI safety and governance, and export its low-cost models around the world. But these risks can’t be swept under the rug as the tools go global. China must offer transparency if it is truly leading the way in responsible development.
Framing the problem through the lens of a US-China race misses the point. If anything, it allows companies to use geopolitical rivalry as an excuse to dodge scrutiny and speed ahead with AI development. Such a backdrop puts more young people at risk of becoming collateral damage.
An outsize amount of public attention has been paid to frontier AI threats, such as the potential for these computer systems to go rogue. Bodies like the United Nations have spent years urging multilateral cooperation on mitigating catastrophic risks.
Protecting vulnerable people now, however, shouldn’t be divisive. More research on mitigating these risks and preventing jailbreaks must be open and shared. Our failure to find the middle ground is already costing lives.
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.