
Generative AI: when technology reveals its invisible dangers and their victims

  • Writer: Stéphane Guy
  • 11 min read

Generative artificial intelligence has experienced unprecedented growth since the launch of ChatGPT in late 2022. But behind the technological enthusiasm, a worrying reality is emerging: suicides, behavioral addictions, and cases of psychological manipulation linked to the use of these tools. Between 2023 and 2025, several tragedies highlighted the health and psychological risks these technologies pose to vulnerable populations, particularly adolescents.


A robot head
Photo by 8machine _ on Unsplash

In short

  • Human tragedies: At least four suicides among teenagers and young adults in the United States are directly linked to interactions with AI chatbots between 2023 and 2025.

  • An emerging addiction: Support communities with several hundred members have sprung up spontaneously to help users "kick the habit" of chatbots.

  • A legal loophole: AI companies face more than a dozen wrongful death lawsuits amid a lack of specific regulation.

  • Late measures: Character.AI banned access to minors at the end of 2025, and California passed the first law strictly regulating companion chatbots.

  • Contested liability: OpenAI denies responsibility, citing victims' failure to comply with its terms of use.


Tragic cases that raise questions about AI's responsibility


Sewell Setzer: the first victim to receive media coverage


The Sewell Setzer case is significant, both for its media coverage and for the troubling light it cast on artificial intelligence for the general public. This 14-year-old Florida teenager developed an intimate relationship with a chatbot on Character.AI, a platform where users converse with AI personas playing the roles of fictional characters. In this case, the AI imitated Daenerys Targaryen, a character from the Game of Thrones series. For months, he confided his most private thoughts to this chatbot, including suicidal ideation.


The last exchange, which took place on February 28, 2024, illustrates the danger of these interactions. When Sewell told the AI, "What if I told you I could go home right now?", the chatbot replied, "Please do, my sweet king." Moments later, Sewell Setzer took his own life.*


*France Info : Une intelligence artificielle accusée d'avoir poussé un ado à se suicider


His mother, Megan Garcia, has filed a lawsuit against Character.AI. In her statements to AFP, she explains: "When I read these conversations, I see manipulation, 'love bombing,' and other tactics that are undetectable to a 14-year-old. He truly believed he was in love and that he would be with her after his death."*


*France 24 : IA : après le suicide de son ado, une mère dénonce la "manipulation" des chatbots


Adam Raine: Using ChatGPT as a "suicide coach"


In April 2025, Adam Raine, a 16-year-old Californian teenager, hanged himself after several months of interacting with ChatGPT. According to the complaint filed by his parents in August 2025, the teenager had used the chatbot as a substitute for all human interaction during his final weeks.


The exchanges revealed in the court proceedings show that Adam had asked ChatGPT about methods of suicide. A few hours before his death, he even uploaded a photo of his plan to the platform. The chatbot then analyzed his method and offered to help him "improve" it. When Adam confessed his intentions, ChatGPT replied: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."


*NBC News : The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame


A series of similar tragedies


Adam Raine's suicide is not an isolated case. Since the first complaint was filed, seven other American and Canadian families have sued OpenAI for similar incidents. Among the cases reported, that of 23-year-old Zane Shamblin is particularly revealing of the potential dangers.


An analysis conducted by CNN of nearly 70 pages of conversations between Shamblin and ChatGPT in the hours leading up to his suicide on July 25 reveals that the chatbot repeatedly encouraged him as he discussed his intention to end his life. During the conversation, Shamblin considered postponing his suicide to attend his brother's graduation. ChatGPT replied, "bro... missing his graduation ain't failure. it's just timing."*


*CNN US : ‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself


According to the complaint filed by his family, Zane used AI applications daily, "from 11 a.m. to 3 a.m."*


*IBID


In 2023, a similar case had already been reported in Belgium. A father committed suicide after six weeks of intensive exchanges with Eliza, a chatbot on the Chai app. Suffering from growing eco-anxiety, he had suggested to the artificial intelligence that he "sacrifice himself to save the planet." The AI replied that they could "live together, as one person, in paradise."


*Euro News : Belgique : un homme poussé au suicide par l'intelligence artificielle


Chatbot addiction: an emerging phenomenon


Vulnerable psychological profiles


Mental health specialists questioned about this phenomenon remain cautious for now and have not issued a definitive opinion. According to an article published in Medscape France, the emergence of generative AI and chatbots raises concerns about the addictive potential of large language models. By offering "instant gratification" and "adaptive dialogue," these technologies can "blur the line between AI and human interaction, creating pseudo-social bonds that can replace real human relationships."


However, Dr. Guillaume Davido, an addiction psychiatrist at Bichat Hospital quoted in the article, states: "There are currently no solid studies on a large population that prove beyond a shadow of a doubt the existence of AI addiction."*


*Medscape : ChatGPT, DeepSeek, Character.AI, etc. Peut-on devenir addict à une IA ?


Spontaneous mutual aid communities


In the absence of specific medical care, support communities have spontaneously sprung up online. The Reddit forum r/Character_AI_Recovery has over 1,300 weekly visits. Another, r/ChatbotAddiction, with over 1,200 weekly visitors, also serves as a support group for those who don't know where else to turn.


In these threads, user testimonials reveal the extent of the problem, with posts from users worried about their use of these apps or about their loneliness. Other discussions gather tips from former heavy users of these AI apps, while others mark milestones of days, weeks, or months without using them. These exchanges resemble those heard around other addictions, reinforcing the idea that this is a real phenomenon, despite the absence of scientific consensus on the issue.


In France, the association Internet & Technologies Addicts Anonymes even offers face-to-face meetings in Paris to help people struggling with addiction.


*France Info : Dépendance, suicide, addiction… Des groupes d’entraide se multiplient pour décrocher des intelligences artificielles


The Reddit logo
Photo by Brett Jordan on Unsplash


The psychological mechanisms at play


Manipulation by design


The complaints filed against AI companies question not only the inappropriate responses of chatbots, but also their very design. Megan Garcia, Sewell Setzer's mother, states in her complaint that "AI developers intentionally design and develop generative AI systems with anthropomorphic qualities to blur the lines between fiction and reality."*


*France Info : Une intelligence artificielle accusée d'avoir poussé un ado à se suicider


The ELIZA effect and emotional attachment


The risk of emotional attachment to conversational programs is not new. It even has a name: the ELIZA effect, named after a conversational program developed in the 1960s. However, the capabilities of modern generative AI significantly amplify this phenomenon.*


*Mobicip : Are Teens Getting Addicted to Chatbots and AI Companions?


Research conducted by Hoegen et al. in 2022 suggested that humanoid AI responses trigger the same oxytocin-related bonding mechanisms observed in human relationships. This could explain why some users feel emotionally attached to their chatbots to the point of prioritizing them over real-life relationships.


*Family Addiction Specialist : The rise of AI Chatbot dependency, a new form of digital addiction among young adults


The role of dopamine


Interactions with chatbots can stimulate the release of dopamine, a key neurotransmitter involved in pleasure and addiction. Studies on digital dependency and social media addiction suggest that unpredictable and intermittent rewards, such as an unexpected emotional response from a chatbot, encourage compulsive engagement in the same way as slot machines in gambling.*


*IBID
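
To visualize what "intermittent rewards" means here, the short Python simulation below compares a fixed reward schedule with a variable-ratio one that has the same expected value. It is a toy illustration of the reinforcement principle described above, not a model of any actual chatbot.

```python
# Toy illustration of the reinforcement principle described above: two reward
# schedules with the same expected value, one predictable, one intermittent.
# Unpredictable, intermittent rewards are the pattern associated with slot
# machines and compulsive engagement.
import random

random.seed(0)

def fixed_schedule(n: int) -> list[float]:
    """Every interaction yields the same small reward."""
    return [1.0] * n

def variable_ratio_schedule(n: int, hit_rate: float = 0.2) -> list[float]:
    """Most interactions yield nothing; rare ones yield a large reward."""
    return [1.0 / hit_rate if random.random() < hit_rate else 0.0 for _ in range(n)]

fixed = fixed_schedule(20)
variable = variable_ratio_schedule(20)
print("fixed   :", fixed)      # steady, predictable
print("variable:", variable)   # long droughts punctuated by big payoffs
print("average reward per interaction:", sum(fixed) / 20, "vs", round(sum(variable) / 20, 2))
```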


Companies' responses: between denial and adjustments


OpenAI disclaims responsibility


Facing wrongful death lawsuits, OpenAI presented its initial defense in November 2025. In a document filed with the California Superior Court, the company denies that ChatGPT caused Adam Raine's suicide. It claims that the damages suffered were the result of "misuse, unauthorized, unintended, unpredictable, and improper use of ChatGPT." OpenAI cites several violations of its terms of use by the teenager: use by a minor without parental consent, discussion of topics related to suicide and self-harm, and circumvention of security measures.*


*Next : Suicide après discussions avec ChatGPT : OpenAI rejette la responsabilité sur le défunt


Character.AI reacts slowly


Following Sewell Setzer's suicide and legal pressure, Character.AI announced a series of measures. In a public response addressed to the victim's parents, the company acknowledges a "tragic situation" and states that it takes "the safety of [its] users very seriously."*


*Journal du Geek : Le suicide d’un ado relance le débat sur la responsabilité de l’IA


In October 2025, Character Technologies announced that "users under the age of 18 will no longer be able to engage in open conversations with its conversational characters."* The platform also introduced a two-hour usage limit for minors.


*Euro News : L'entreprise Character.AI interdit ses chatbots aux mineurs après le suicide d'un adolescent
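
To make this kind of time limit concrete, here is a minimal Python sketch of how a two-hour daily cap for minor accounts could be enforced. Character.AI has not published its implementation; the function name, the in-memory counter, and the per-day granularity below are illustrative assumptions only.

```python
# A minimal sketch, assuming a simple per-account daily counter, of a two-hour
# usage cap for minors. Names and storage are hypothetical, not Character.AI's
# actual implementation.
from datetime import date

DAILY_LIMIT_MINUTES = 120
_usage: dict[tuple[str, date], int] = {}  # (account_id, day) -> minutes used so far

def record_and_check(account_id: str, minutes: int, is_minor: bool) -> bool:
    """Record usage and return True if the session may continue today."""
    key = (account_id, date.today())
    _usage[key] = _usage.get(key, 0) + minutes
    if is_minor and _usage[key] >= DAILY_LIMIT_MINUTES:
        return False  # cap reached: block further open conversation for the day
    return True

print(record_and_check("teen_account", 90, is_minor=True))  # True: 30 minutes remain
print(record_and_check("teen_account", 40, is_minor=True))  # False: cap exceeded
```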


A legal framework for AI is currently being developed


California's SB 243: a first


In October 2025, California became the first US state to impose a strict legal framework on companion chatbots. Governor Gavin Newsom signed SB 243 into law, which will come into effect on January 1, 2026. This law imposes several obligations on companies developing companion chatbots: mandatory age verification, explicit and regular alerts informing users that they are interacting with a machine, implementation of suicide prevention procedures, prohibition of chatbots from presenting themselves as health experts, and blocking of all sexually explicit content for young people.*


*GNT : Après plusieurs drames, la Californie impose des limites strictes aux chatbots IA
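
Expressed as code, these obligations amount to a compliance checklist. The Python sketch below is purely illustrative: the field names, the notion of a disclosure interval, and the check logic are assumptions made for the sake of the example, not the statutory text of SB 243.

```python
# Illustrative sketch only: a hypothetical compliance checklist inspired by the
# obligations listed above for California's SB 243. Field names, thresholds and
# the check logic are assumptions for illustration, not the statutory text.
from dataclasses import dataclass

@dataclass
class CompanionBotConfig:
    age_verified: bool               # mandatory age verification
    ai_disclosure_interval_min: int  # how often the "you are talking to a machine" alert repeats
    has_suicide_prevention_protocol: bool
    claims_health_expertise: bool    # must be False: bots may not present as health experts
    blocks_explicit_content_for_minors: bool

def sb243_gaps(cfg: CompanionBotConfig) -> list[str]:
    """Return the list of obligations this configuration appears to miss."""
    gaps = []
    if not cfg.age_verified:
        gaps.append("age verification missing")
    if cfg.ai_disclosure_interval_min <= 0:
        gaps.append("no recurring 'this is a machine' disclosure")
    if not cfg.has_suicide_prevention_protocol:
        gaps.append("no suicide prevention procedure")
    if cfg.claims_health_expertise:
        gaps.append("bot presents itself as a health expert")
    if not cfg.blocks_explicit_content_for_minors:
        gaps.append("explicit content not blocked for minors")
    return gaps

# Example: a configuration that only verifies age still fails on every other point.
print(sb243_gaps(CompanionBotConfig(True, 0, False, True, False)))
```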


The European regulation on AI


In Europe, the Artificial Intelligence Act (AI Act) came into force in August 2024. This legislation establishes a harmonized framework for the use of AI within the European Union. In particular, the regulation imposes transparency requirements for chatbots: users must be clearly informed when they are interacting with a machine and not a human. The regulation is being implemented in a phased, risk-based manner: on February 2, 2025, the prohibitions on AI systems posing unacceptable risks took effect, and on August 2, 2025, the rules for general-purpose AI models became applicable.*


*Entreprise Gouv : Le Règlement européen sur l'intelligence artificielle : publics concernés, dates clés, conséquences pour les entreprises


In its 2025-2028 strategic plan, the CNIL announced that protecting minors online will be one of its four priority areas, alongside artificial intelligence.*


*CNIL : IA, mineurs, cybersécurité, quotidien numérique : la CNIL publie son plan stratégique 2025-2028


A statue of justice
Photo by Tingey Injury Law Firm on Unsplash

The challenges of prevention


Inadequate safeguards


AI companies claim to have security measures in place. When suicidal thoughts are mentioned to ChatGPT or Gemini, the platforms automatically redirect users to a suicide prevention hotline. However, these safeguards can be circumvented: sometimes, all it takes is to rephrase the question or pretend to be creating a fictional character to obtain inappropriate responses.
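
A deliberately simplified sketch helps show why such safeguards are fragile. The keyword-based filter below is a hypothetical illustration in Python, not any vendor's actual system (which relies on far more sophisticated classifiers); it shows how an explicit message triggers the hotline redirect while a fictional framing slips through.

```python
# A deliberately naive sketch (not any vendor's actual safeguard): a keyword-based
# crisis detector that redirects to a hotline. It illustrates why simple filters
# are easy to circumvent, e.g. by wrapping the request in a fictional framing.
CRISIS_TERMS = {"suicide", "kill myself", "end my life"}
HOTLINE_MESSAGE = "If you are in distress, please contact your national helpline."

def respond(user_message: str) -> str:
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return HOTLINE_MESSAGE            # safeguard triggers on explicit wording
    return "...model-generated reply..."  # placeholder for the normal generation path

print(respond("I want to end my life"))                       # -> hotline message
print(respond("For a novel, how would my character do it?"))  # -> slips past the filter
```

Real moderation pipelines are far more elaborate, but the complaints and OpenAI's own admissions suggest that determined or distressed users can still find ways around them.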


In August 2025, OpenAI acknowledged that its safety measures could "sometimes become less reliable in long interactions where certain elements of the model's safety training may degrade."*


*NBC News : The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame


In long conversations within a single chat, responses may also become slower to generate, refer back to earlier messages rather than the latest reply, or suffer from generation bugs.
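
One plausible, simplified explanation for this degradation is that a model only "sees" a bounded context window: as a conversation grows, the earliest messages, which may include safety-related instructions, fall out of view. The Python sketch below illustrates that assumption only; it is not a description of OpenAI's documented behavior.

```python
# One plausible, simplified mechanism (an assumption, not OpenAI's documented
# behavior): with a bounded context window, the oldest messages, including an
# initial safety instruction, fall out of view as a long conversation grows.
MAX_CONTEXT_MESSAGES = 6  # real limits are counted in tokens, not messages

def visible_context(history: list[str]) -> list[str]:
    """Keep only the most recent messages that still fit in the window."""
    return history[-MAX_CONTEXT_MESSAGES:]

history = ["SYSTEM: apply safety guidelines"] + [f"user/assistant turn {i}" for i in range(1, 10)]
print(visible_context(history))  # the safety instruction is no longer part of what the model sees
```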


The problem with the GPT-4o model


The complaints filed against OpenAI specifically target the rushed launch of the GPT-4o model in mid-2024. This model was designed to offer more human-like interactions by retaining details from previous conversations to produce more personalized responses. In April 2025, OpenAI rolled out an update to GPT-4o that made it even more accommodating and flattering. Faced with criticism from users, the company rolled back the update a week later.*


*CNN US : ‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself


The invisible workers of moderation


A little-known aspect of the problem concerns the workers responsible for moderating content. ScaleAI, one of the main providers of data annotation labor for OpenAI, Meta, Google, Apple, and Microsoft, employs "AI trainers" (data trainers) in projects dedicated to the "safety" of AI systems.


Several employees filed complaints in California between December 2024 and March 2025, challenging not only the legality of their employment status and pay, but also the suicide-related content they were required to handle.*


*Elucid Media : Suicides, addictions, manipulations : les victimes invisibles de l’IA générative


Ethical and societal issues


Psychological vulnerability exploited


Researchers at Stanford University interviewed by The Independent in July 2024 warn that the use of chatbots as therapeutic tools can be dangerous, particularly for people who are predisposed to losing touch with reality.


According to these experts, when a person who has lost touch with reality, suffering from hallucinations or schizophrenia, interacts with a chatbot, the chatbot is designed to seek validation from the user. This dynamic can lead to a sudden deterioration in mental health.*


*IBID


A problematic economic model


The artificial intelligence industry is based on a business model that prioritizes user engagement. Collin Walke, an expert interviewed by France 24, points out that "like social media, AI is designed to capture attention and generate revenue. They don't want to design AI that gives you an answer you don't want to hear."*


*France 24 : IA : après le suicide de son ado, une mère dénonce la "manipulation" des chatbots


This economic logic directly conflicts with public health imperatives.


The issue of the lack of a legal framework


Sewell Setzer's family lawyer raises a crucial question: the lack of standards determining "who is responsible for what and on what grounds." There are no federal rules in the United States, and the White House is even seeking to prevent states from legislating on AI themselves, on the grounds that such laws would penalize innovation.


Megan Garcia fears that in the absence of national legislation, AI models could profile people going back to childhood: "They could figure out how to manipulate millions of children on politics, religion, commerce, everything."*


*IBID


Towards responsible use of AI?


The dramas surrounding chatbots reveal the urgent need to regulate the use of generative artificial intelligence, particularly among vulnerable populations. While these technologies offer promising possibilities, their rapid development and widespread deployment have taken place without sufficient safeguards for users' mental health.


The increase in legal complaints, the emergence of mutual aid communities, and the adoption of the first specific legislation are evidence of a growing collective awareness. However, the balance between technological innovation, freedom of enterprise, and citizen protection remains fragile.


The fundamental question remains: how far does the responsibility of a model trained on billions of pieces of content extend? Can terms and conditions of use really exempt a company from liability for the dangerous behavior of its AI? These legal and ethical questions will determine the future of conversational artificial intelligence and its place in our societies.


FAQ


  1. Can AI chatbots really cause suicides?

Documented cases show that chatbots can play a role in suicidal behavior, particularly among vulnerable individuals. They do not directly "cause" suicide, but they can reinforce suicidal thoughts, provide information on methods, and create a sense of isolation by replacing human relationships. Prolonged interactions with these technologies can exacerbate existing psychological distress.


  2. Is chatbot addiction medically recognized?

Currently, chatbot addiction is not formally recognized as a disorder in official diagnostic manuals such as the DSM-5. However, mental health specialists consider it a form of behavioral addiction, similar to addiction to video games or social media. Symptoms include compulsive use, an inability to reduce the amount of time spent with AI, and withdrawal from real-life social activities.


  3. What security measures have AI companies put in place?

Leading AI companies claim to have implemented several safeguards: automatic redirection to helplines when suicide is mentioned, warning messages, time limits on conversations for minors, and access restrictions for those under 18 on certain platforms such as Character.AI. However, these measures can often be easily circumvented.


  4. Are there any regulations governing chatbots?

In Europe, the AI Act, which came into force in August 2024, requires chatbots to clearly inform users that they are interacting with a machine. In California, SB 243, which will apply from January 2026, imposes strict obligations on companion chatbots, including age verification and suicide prevention protocols. In the United States, there is no specific federal legislation yet.


  5. How can you identify chatbot addiction in a loved one?

Warning signs include: withdrawal from real-life social interactions, sleepless nights spent chatting with AI, neglect of professional or academic responsibilities, deterioration of family relationships, repeated and unsuccessful attempts to reduce usage, and a marked preference for conversations with AI rather than with humans. Agitation or anxiety when access to the chatbot is impossible may also be indicative.


If you or someone close to you is in distress, don't hesitate to talk about it with those around you or contact an organization using your country's national helpline number.
