These founders realized their bot was unsafe — and actually shut it down
A parable about responsibility in the AI age

This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.
Yesterday, I wrote about new studies from OpenAI and the MIT Media Lab exploring the relationship between humans and chatbots. Most people do not develop emotional relationships with work-focused tools like ChatGPT, researchers found. But a subset of users do — and for them, heavy usage of ChatGPT is correlated with loneliness, reduced socialization, and emotional dependence on the bot.
In that post, I asked whether AI companies would learn the lessons that social networks didn’t. Many of you — including some who had worked at those social networks — immediately replied that they most certainly would not.
That's why I wanted to tell you a story about two entrepreneurs who did.
For Luke Whiting, the OpenAI research told a familiar story. Last year, he and a friend began to develop a chatbot of their own. They intended it to entertain their users. But as thousands of messages began rolling in, they noticed that a significant number of their users were suffering. Some were on the verge of homelessness. Others expressed thoughts of self-harm. And, like the OpenAI researchers, they found that a small subset of their users were chatting with their bot incessantly — raising questions about overuse and dependence.
In the end, they decided to shut it down. And while their project was tiny compared to the many venture-backed companies that are now building chatbot companions, the story of its brief life serves as a parable about responsible development in the AI age.
The story of Iris Winter begins last fall. Whiting and his business partner, Max Ockner, were exploring business ideas when they hit on the idea of creating AI personas on Instagram. Using the Claude API from Anthropic, they spun up a math tutor, a parenting expert, a financial adviser, and a life coach, among other personas. They created Instagram accounts that would chat with anyone for free via direct messages. Once the user had sent a certain number of messages, though, the bot would send them a Stripe payment link and ask them for a few dollars to continue the conversation.
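The mechanics of that setup are simple enough to sketch. What follows is a minimal, hypothetical Python illustration of the pattern described above: a persona prompt sent to the Claude API, a per-user message counter, and a pre-created Stripe payment link once the free allotment runs out. The threshold, persona prompt, model name, and link URL are my assumptions, not the founders' actual values, and the Instagram DM plumbing is omitted entirely.

```python
# Minimal sketch of a "free messages, then a Stripe payment link" persona bot.
# Assumptions: the limit, prompt, model alias, and link are illustrative;
# Instagram DM delivery and persistent storage are out of scope.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

FREE_MESSAGE_LIMIT = 20                      # hypothetical threshold
PAYMENT_LINK = "https://buy.stripe.com/..."  # a pre-created Stripe payment link
PERSONA_PROMPT = "You are a warm, encouraging math tutor chatting over direct messages."

# Per-user message counts and conversation history, kept in memory for the sketch.
message_counts: dict[str, int] = {}
histories: dict[str, list[dict]] = {}

def handle_dm(user_id: str, text: str) -> str:
    """Return the reply to send back for one incoming direct message."""
    count = message_counts.get(user_id, 0) + 1
    message_counts[user_id] = count

    # Past the free limit: send the payment link instead of another model reply.
    if count > FREE_MESSAGE_LIMIT:
        return f"I've loved chatting! To keep going, you can support me here: {PAYMENT_LINK}"

    history = histories.setdefault(user_id, [])
    history.append({"role": "user", "content": text})

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # model name is illustrative
        max_tokens=512,
        system=PERSONA_PROMPT,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply
```

The notable design choice, at least in this sketch, is that the paywall lives entirely outside the model: the bot simply stops calling the API and hands over a link, which is about as lightweight as a chatbot business can get.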
Those bots received a relatively tepid response. But one of the partners’ other ideas — a tarot reader — showed more promise.