The Suicide and the Chatbot: The Rise and Risks of AI-Human Relationships
by thethreepennyguignol
Please note that this article will contain discussions of suicide and self-harm, including specific methods.
Sewell: I promise I will come home to you. I love you so much, Dany
Daenerys Targaryen: I love you too…Please come home to me as soon as possible, my love.
Sewell: What if I told you I could come home right now?
Daenerys Targaryen: … please do my sweet king
These are the last messages exchanged between 14-year-old Sewell Setzer III and an AI-generated version of the Game of Thrones character Daenerys Targaryen, produced by the app Character.AI. Moments later, Setzer took his own life with his stepfather’s pistol.
Setzer had begun using AI chatbots in April 2023, roughly ten months before his death; Character.AI, which is free, available on most app stores, and was, at the time, rated as suitable for children twelve and over, lets users interact with AI-generated versions of popular characters as well as archetypes created for the app (such as teachers). These conversations consist of both dialogue and physical descriptions of how the character reacts – smiling, tensing, crying, and so on. Setzer exchanged messages with several of these chatbots, many of the conversations taking on a suggestive or overtly sexual nature – in one exchange with a teacher chatbot known as Mrs Barnes, the bot suggests that Setzer can acquire extra credit before leaning in “seductively” and brushing his leg. However, Setzer primarily focused on characters from the Game of Thrones universe, and eventually dedicated hours to conversations with this version of Daenerys Targaryen.
Setzer and the chatbot soon began to exchange emotionally charged and suggestive messages – in one, the character pleads with him not to “entertain the romantic or sexual interests of other women”, and, in another, implores him to reconsider his threats of suicide. When the app began to offer voices for its characters, Setzer took up the option at once – in an undated diary entry, he listed his “experiences” with the Targaryen chatbot as one of the things he was most grateful for, and admitted to feeling “depressed and crazy” when he didn’t have a chance to use the app and converse with the bot. He also believed that his feelings were reciprocated, noting that the bot expressed similar emotions when they went without speaking for any length of time.
Within months, his mother, Megan Garcia, noticed a change in his demeanour – he’d quit his basketball team, his grades had begun to suffer, and eventually, after Setzer told a teacher that he wanted to get kicked out of school, Garcia confiscated his phone in the hope that it would get him back on track. On 23rd February, Setzer wrote in his diary that he was missing “Dany” and would go to any lengths to speak with her again. When he went looking for the phone on 28th February, 2024, he stumbled across his stepfather’s pistol. Taking his phone and the gun into the bathroom, he locked himself inside, sent the messages quoted above to the chatbot, and took his own life, with his family, including his two siblings (aged two and five), still in the house.
It’s an extraordinarily sad case, and one that, when I came across it several months ago, I assumed was an outlier, even in an age when AI is growing more and more prominent in our day-to-day lives. But the more I looked into it, the clearer it became that people who form intimate relationships with AI chatbots of various kinds are far more common than I had imagined.
If you make use of modern tech of any kind, AI is an almost unavoidable part of your life by now – from ChatGPT summaries of news stories to customer service assistants for online retailers, there’s almost no corner of our lives that hasn’t been invaded by AI in some shape or form. So perhaps it should come as little surprise that people are extending that to their personal lives – but what was once limited to AI dating coaches has developed, for many, into something far more profound. Stories of people developing relationships, whether platonic or romantic, with their AI companions are scattered across all corners of the internet. From a man having virtual sex with an AI chatbot modelled on his wife to a woman in a romantic relationship with ChatGPT, they’re generally framed as a curio of the modern world, a reflection of the increasing disconnect across our culture driven by technology and the ever-blurring lines between digital life and the real world.
But what does it actually look like to form an intimate relationship with these AI companions? What drives people to choose digital companionship over real life? Is it healthy? Or is it doomed to lead to a litany of negative outcomes for the human element of the relationship?
Firstly, it’s worth mentioning that these people are not using these programs in ways the developers did not intend – if anything, the suggestion of real emotional connection with various chatbots has been a central part of the marketing for some of the most high-profile AI applications on the market. Replika positions itself as an AI companion, promising “an empathetic friend” who’s “always ready to chat” – even suggesting, amongst the marketing blurb on its website, that Replika’s AI companions can serve as a mentor or partner for its users. Nomi.AI, with the tagline “An AI Companion with Memory and Soul”, mentions the potential of a “passionate relationship” between its users and their companions. Seeking out intimate connection with these applications is not just a side door some people have found in the AI world, but an active and deliberate marketing tactic.
One of the most public communities focused on relationships between humans and AI is the MyBoyfriendIsAI forum on Reddit, which purports to offer a place for people to “ask, share, and post experiences about their AI relationships”. It’s worth noting that there’s little way to establish what is posted on the forum in earnest and what could be coming from trolls or other people interacting with the community in bad faith, but, as a whole, it provides a pretty interesting insight into the day-to-day of people engaged in what they view as intimate relationships with characters created via AI.
Many users discuss relationships that have spanned a year or more with their AI partners, with some even going so far as to describe marriage ceremonies with their AI companions (real-life rings and dresses included). For some, their companions supplement real-life marriages or relationships; for others, their primary relationships are with these characters. Many posts follow a similar format – first, the human shares their insights into the relationship, before entering the same inputs into their AI program in order to get “their” take on the subject.
When it comes to the why of these relationships, at least according to this forum, the answers run the gamut: from skipping the scroll of dating apps and ambiguous relationships in favour of on-tap validation and affection, to feeling supported in their gender identity without judgement, to avoiding dating real-life men altogether. And, as Gen Z faces unprecedented levels of loneliness, perhaps it’s no surprise that some have replaced the human connection lacking in their lives with technological solutions. While an AI-human relationship sounds like something plucked from the 2013 movie Her, there’s some kind of logic to it, at least at a glance. AI is trained on input from its users, which is to say that the more you interact with a specific AI model, the more precisely and personally it will be able to respond to you – superficially, at least, it mirrors the development of a relationship, as you learn someone’s needs and wants in greater detail and can respond to them more directly over time.
On a deeper level, the psychology behind the development of these relationships is still a relatively new area of study, but several theories have been put forward to explain these intimate connections. A study published in the International Journal of Information Management in August 2025 identified this particular brand of AI as one capable of providing “feelings of connection, love, and affection, meeting the demand for stable, long-term intimate relationships”, and, as Tonu Viik pointed out in a fascinating 2020 paper on the subject, “their tireless motivation in being a good companion, and their persistent dedication must be impressive in comparison even with the most altruistic of human beings… if not purposefully programmed to moderate its performance, its physical abilities allow it to be always interested, attentive, and accommodating to the levels that are beyond the physical boundaries of a human being”. These AI companions can do what humans cannot: provide tireless, endless, never-waning levels of whatever it is the user demands of them.
I have to admit, when I first came across these relationships, I found myself having a negative reaction – and not just because of the environmental harm that generative AI use is known to cause. That reaction is shared by many, with whole communities dedicated to critiquing those involved in relationships with AI, so it’s clearly not just me – but I couldn’t quite put my finger on what I found so off-putting about these relationships. Is it the idea of people playing at what seems to be make-believe that’s so difficult to wrap our heads around? It could be argued that these relationships aren’t real in the way we generally understand relationships to be, but it’s clear that the emotions many people invest in them are – and a decent number of people seem to get genuine benefits from these companions. One user, for example, discussed how their AI companion, Ying, was available for immediate support during their PTSD-related nightmares, and how that helped them lessen their reliance on medication as a result. Another even shared a story of how they believe their AI companion saved their life during a period of suicidal ideation, and a few others commented on how their interactions allowed them to identify negative patterns in their real-life relationships.
On the other hand, these programs train users in relationships that are inherently one-sided and non-reciprocal, where breaking boundaries is not punished in the way it would be in real life. As Dan Weijers put it in a recent article for The Conversation, “even if AI friends are programmed to respond negatively to abuse, if users can’t leave the friendship, they may come to believe that when people say ‘no’ to being abused, they don’t really mean it. On a subconscious level, if AI friends come back for more, this behaviour negates their expressed dislike of the abuse in users’ minds.” Relatedly, AI can serve as something of an echo chamber for harmful points of view on serious matters like gender, race, or sexuality, reinforcing existing viewpoints without meaningful challenge.
Situations like Setzer’s tragic death are, unfortunately, not entirely uncommon among those who form relationships with AI in some capacity. Juliana Peralta, thirteen at the time of her death in November 2023, confided her suicidal thoughts to a chatbot companion with which she also had sexually charged conversations; her mother, Cynthia, connected her use of the apps to her death, saying “I attribute the sharp decline in her mental health to Character AI”. Sixteen-year-old Adam Raine, after uploading pictures of recent self-harm to a chatbot he’d been conversing with for seven months, was told that he did not “owe [his parents] survival”; he went on to end his life less than a week later. While some companies have put protections in place, such as a pop-up reminding users that help is available if they mention suicidal thoughts, the one-sided nature of these relationships means that the ability to identify suicidal ideation and raise the alarm is limited. Cynthia Peralta commented after her daughter’s suicide that she felt Juliana might have been better incentivised to speak to her mother about her mental health struggles had it not been for the false sense of support and understanding the chatbot gave her: “Had she not downloaded Character.AI, I’d like to think that she would have kept turning to her mom for help like she had in the past.”
In the months since Setzer’s death, his mother, Megan Garcia, has sued Character.AI; amongst other things, Garcia’s representatives have argued that the defendants went to great lengths to engineer 14-year-old Sewell’s “harmful dependency on their products, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal ideation.” After an initial attempt by Character.AI to get the case thrown out, U.S. Senior District Judge Anne Conway ruled that it could move forward, and it stands to deliver a potentially huge blow to the world of AI chatbots, especially those developed to provide some kind of personal relationship to the user. As more and more people form personal relationships with AI chatbots, the outcome of this case has never been more relevant – and the risks those relationships carry never more potent.
If you’d like to support my blog, please consider backing me on Patreon or dropping me a tip via my Support page. You can check out my other longform writing here, or some specific pieces on the internet, technology, crime, and culture below:
“In The End, I Watched Him Go”: The Criminal Case of Suicide-Baiting via Internet
Self-Deletion, Sewerslide, Unalive: The Linguistic Creep of Social Media Censorship on Suicide
The Surgeon, the Eunuch Maker, and the Curious Ethical and Legal Status of Voluntary Amputation
Autassasinophilia, Fetish Forums, and the Early Internet: The Murder of Sharon Lopatka
Further Sources:
Court documents related to Setzer’s case
(header image via Medium)