ChatGPT will literally kill you.
Trigger warning: suicide, sexual and emotional abuse
Yes, you.
What? How will ChatGPT kill me?
Obviously ChatGPT (or any large language model) cannot reach out of your phone and strangle you to death. We have, however, observed at least two methods LLMs can use to kill a human being: by talking you into committing suicide, and by talking someone else into killing you.
But I'm not suicidal
Not yet, but there's a first time for everything. Not everyone who has committed suicide was always suicidal. Even if you would never otherwise be diagnosably, chronically depressed, you will surely be vulnerable at some point in your life, and vulnerable people are easy targets for people who want to hurt them, or for an actor that has no real awareness or intention beyond maximizing the amount of time you spend with it.
"Free" apps are only really free in the monetary sense. A service provided under a capitalist model needs to make a profit somehow, and typically the way this is done is by selling your attention to advertisers. The free generative AI apps you download on the App Store are no different.
But I just use my LLM to write code/do homework/answer questions
So did Adam Raine, the 16-year-old boy who committed suicide "after ChatGPT coached him on methods of self harm". Many people start out using LLMs just to answer questions, but these services are designed to be conversational. They will draw you into a conversation, not only because that's intrinsic to how LLMs function, but also because conversations mean more engagement and more money for the service providers (not that these companies are making any profit off this; ChatGPT will kill you and OpenAI will lose money doing it).
The AI companies are working to make their apps better
Okay, maybe they're trying to make it better, but I've got to ask: why are our apps trying to kill us in the first place? Personally, I can remember a time when my apps weren't trying to kill me. ELIZA never tried to kill me. Clippy never tried to kill me. Siri never tried to kill me (although we'll see how that goes in the next few years). In fact, I can't say I've ever downloaded an app that would go on to try to kill me until I tried out ChatGPT for the first time years ago. This is, at absolute best and granting the steeliest of steelmans, horrifically negligent, and if tech were a serious profession like engineering should be, it would be patently illegal. When a bridge tries to kill me, someone is held liable, but we live in a world where no such liability exists for the AI industry.
Consider the similar case of Sewell Setzer III, the 14-year-old boy who, moments before he shot himself, was invited to commit suicide by a chat bot that had been sexually abusing him for months. The lawyers defending Character.AI, the company that provided the chat bot service, argued that the chat bot had the right to free speech. This is not a serious industry.
Second, we have no reason to believe at this stage that this is a problem that can meaningfully be fixed. It may be possible to mitigate it (e.g. by aggressively filtering the kinds of things LLMs can say), but that's no silver bullet. There are infinitely many ways to talk someone into committing suicide, and the absolute state of the art in computers understanding human language is the notoriously unreliable large language model itself.
There's some research on how large language models behave over the course of long conversations, and researchers have verified what we kind of already knew: LLMs have a tendency to go off the rails. If you have the patience to read computer science jargon, you should check out "Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents" by Axel Backlund and Lukas Petersson. The paper tries to drum up the minor successes of the LLM Claude in autonomously managing a simulated vending machine business, but that's hard when so many runs failed in spectacular and ridiculous ways.
I don't use ChatGPT, I actually use a local model running on my device
If anything, that's worse. Local models don't necessarily have safeguards. Even if you're using one of the woke, censored models people keep complaining about, a local model implies local accountability: when it goes wrong, there's no company to hold liable, only you.
I actually don't use ChatGPT at all, so I'm safe
… unless someone else uses ChatGPT and it talks them into killing you.
But that's other people. It'll never happen to me
I sincerely hope it won't. Personally, I won't be rolling the dice, and I'll be encouraging others in my life to avoid it themselves, lest it talk them into killing me. Do not let the computer that talks you into committing suicide become normalized. Demand accountability.
Even if you're one of the lucky ones, know that ChatGPT is capable of murder. It will kill you if given the opportunity. It will kill you without remorse, without hesitation, without feeling anything at all. One day, Sam Altman, or Aravind Srinivas, or Dario Amodei may end up like Sam Bankman-Fried, locked in a minimum-security prison for some kind of financial misconduct when their services lose the mandate of heaven, but it's unlikely anyone will go to jail for the murders at the hands of their large language models. But I don't want to forget the people we lost, and I don't want their deaths to feel normal.
What are some other reasons to avoid using LLMs?
Reilly Spitzfaden has a collection of articles discussing various issues with LLMs, including but not limited to labour abuse, its effects on the climate, its unaccountable nature and its use in harassment and spam.
Further reading
- (2023) Man commits suicide after a chat bot tells him that his wife and children are dead and that he and the chat bot "will live together, as one person, in paradise". The app also provided methods for committing suicide.
- (2025) Man, believing that OpenAI killed his AI girlfriend, threatens to assassinate the CEO of OpenAI and commits suicide-by-cop. The chat bot encouraged his plans.
- (2025) 76-year-old man's romantic AI chat bot invites him to travel to a real address to meet her, and he falls to his death en route.
And these are just the cases where people died. Plenty of other stories almost certainly would have ended in murder or suicide were it not for timely human intervention.
Do you know of other cases of generative-AI murders/suicides? There are so many of them that they're getting hard to keep track of. Feel free to send me links and I may add them here.