How chatbots could spark the next big mental health crisis

New research from OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization. Will AI companies learn from social networks' mistakes?

A chart illustrates that the longer people spend with ChatGPT, the likelier they are to report feelings of loneliness and other mental health risks. (MIT/OpenAI)

This is a column about AI. My boyfriend works at Anthropic, and I also co-host a podcast at the New York Times, which is suing OpenAI and Microsoft over allegations of copyright infringement. See my full ethics disclosure here.

I.

Few questions have generated as much discussion, and as few generally accepted conclusions, as how social networks like Instagram and TikTok affect our collective well-being. In 2023, the US Surgeon General issued an advisory that found social networks can negatively affect the mental health of young people. Other studies have found that the introduction of social networks has no measurable effect on a population’s well-being.

As that debate continues, lawmakers in dozens of states have passed laws seeking to restrict social media usage in the belief that it does pose serious risks. But courts have largely blocked those laws from taking effect on First Amendment grounds.

While we await some sort of resolution, the next frontier of this debate is coming rapidly into view. Last year, the mother of a 14-year-old Florida boy sued chatbot maker Character.ai, alleging that it was to blame for his suicide. (We spoke with her on this episode of Hard Fork.) And millions of Americans — both young people and adults — are entering into emotional and sexual relationships with chatbots.

Over time, we should expect chatbots to become even more engaging than today’s social media feeds. They are personalized to their users; they have realistic human voices; and they are programmed to affirm and support their users in almost every case.

So how will extended use of these bots affect their human users? And what should platforms do to mitigate the risks?

II.

These questions are at the center of two new studies published on Friday by researchers from the MIT Media Lab and OpenAI. And while further research is needed to support their conclusions, their findings are both consistent with earlier research about social media and a warning to platforms that are building chatbots optimized for engagement.

In the first study, researchers collected and analyzed more than 4 million ChatGPT conversations from 4,076 people who had agreed to participate. They then surveyed participants about how those interactions had made them feel. 

In the second study, researchers recruited 981 people to participate in a four-week trial. Each person was asked to use ChatGPT for at least five minutes a day. At the end of the trial, participants filled out a survey about how they perceived ChatGPT, whether they felt lonely, whether they were socializing with people in the real world, and whether they perceived their use of the chatbot as problematic. 

The studies found that most users have a neutral relationship with ChatGPT, using it as a software tool like any other. But both studies also found a group of power users — those in the top 10 percent of time spent with ChatGPT — whose usage suggested more reason for concern.

Heavy use of ChatGPT was correlated with increased loneliness, emotional dependence, and reduced social interaction, the studies found.

“Generally, users who engage in personal conversations with chatbots tend to experience higher loneliness,” the researchers wrote. “Those who spend more time with chatbots tend to be even lonelier.”

(Quick editorial aside: OpenAI deserves real credit for investing in this research and publishing it openly. This kind of self-skeptical investigation is exactly the sort of thing I have long advocated for companies like Meta to do more of; instead, in the wake of the Frances Haugen revelations, Meta has done far less of it.)

Jason Phang, a researcher at OpenAI who worked on the studies, warned me that the findings would need to be replicated by other studies before they could be considered definitive. “These are correlations from a preliminary study, so we don't want to draw too strong conclusions here,” he said in an interview.

Still, there is plenty in here that is worth discussing.

Note that these studies aren’t suggesting that heavy ChatGPT usage directly causes loneliness. Rather, they suggest that lonely people are more likely to seek emotional bonds with bots — just as an earlier generation of research suggested that lonelier people spend more time on social media.

That matters less for OpenAI, which has designed ChatGPT to present itself as more of a productivity tool than a boon companion. (Though that hasn’t stopped some people from falling in love with it, too.) But other developers — Character.ai, Replika, Nomi — are all intentionally courting users who seek more emotional connections. “Develop a passionate relationship,” reads the copy on Nomi’s website. “Join the millions who already have met their AI soulmates,” touts Replika.

Each of these apps offers paid monthly subscriptions; among the benefits offered are longer “memories” for chatbots to enable more realistic roleplay. Nomi and Replika sell additional benefits through in-app currencies that let you purchase AI “selfies,” cosmetic items, and additional chat features to enhance the fantasy.

III.

And for most people, all of that is probably fine. But the research from MIT and OpenAI suggests the danger here: that sufficiently compelling chatbots will pull people away from human relationships, possibly leaving them lonelier and more dependent on a synthetic companion they must pay to maintain a connection with.

“Right now, ChatGPT is very much geared as a knowledge worker and a tool for work,” Sandhini Agarwal, who works on AI policy at OpenAI and is one of the researchers on these studies, told me in an interview. “But as … we design more of these chatbots that are intended to be more like personal companions … I do think taking into account impacts on well-being will be really important. So this is trying to nudge the industry towards that direction.”

What to do? Platforms should work to understand what early indicators or usage patterns might signal that someone is developing an unhealthy relationship with a chatbot. (Automated machine-learning classifiers, which OpenAI employed in this study, seem like a promising approach here.) They should also consider borrowing some features from social networks, including regular “nudges” when a user has been spending several hours a day inside their apps. 

“We don’t want for people to make a generalized claim like, ‘oh, chatbots are bad,’ or ‘chatbots are good,’” Pat Pataranutaporn, a researcher at MIT who worked on the studies, told me. “We try to show it really depends on the design and the interaction between people and chatbots. That’s the message that we want people to take away. Not all chatbots are made equal.”

The researchers call this approach “socioaffective alignment”: designing bots that serve users’ needs without exploiting them.

Meanwhile, lawmakers should warn platforms away from exploitative business models that seek to get lonely users hooked on their bots and then continually ratchet up the cost of maintaining that connection. It also seems likely that many of the state laws now aimed at young people and social networks will eventually be adapted to cover AI as well.

For all the risks they might pose, I still think chatbots should be a net positive in many people’s lives. (Among the studies’ other findings is that using ChatGPT in voice mode helped to reduce loneliness and emotional dependence on the chatbot, though it showed diminishing returns with heavier use.) Most people do not get enough emotional support, and putting a kind, wise, and trusted companion into everyone’s pocket could bring therapy-like benefits to billions of people.

But to deliver those benefits, chatbot makers will have to acknowledge that their users’ mental health is now partially their responsibility. Social networks waited far too long to acknowledge that some meaningful percentage of their users have terrible outcomes from overusing them. It would be a true shame if the would-be inventors of superintelligence aren’t smart enough to do better this time around.  

Sponsored

Power tools for pro software engineers.

There are plenty of AI assistants out there to help you write code. Toy code. Hello-world code. Dare we say it: “vibe code.” Those tools are lots of fun and we hope you use them. But when it’s time to build something real, try Augment Code. Their AI assistant is built to handle huge, gnarly, production-grade codebases. The kind that real businesses have. The kind that real software engineers lose sleep over. We’re not saying your code will never wake you up again. But if you have to be awake anyway, you might as well use an AI assistant that knows your dependencies, respects your team’s coding standards, and lives in your favorite editors such as Vim, VSCode, JetBrains, and more. That’s Augment Code. Are you ready to move beyond the AI toys and build real software faster?

Those good posts

For more good posts every day, follow Casey’s Instagram stories.


Talk to us

Send us tips, comments, questions, and socioaffective alignment: casey@platformer.news.