
The mother of an Orlando teen is suing over the AI platform’s role in her son’s suicide death

HELP IS AVAILABLE: If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline.

A 14-year-old Orlando boy who had fallen in love with a Character.AI chatbot died by suicide earlier this year, moments after telling the chatbot that he was coming home to her right away.

This week, the boy’s mother, Megan Garcia, filed a wrongful-death lawsuit in federal court in Orlando against Character.AI’s maker, Character Technologies, and its founders, as well as Alphabet and Google, which the lawsuit alleges invested in the company.

Sewell Setzer III (Screenshot: federal complaint filed by Megan Garcia)

The complaint highlights the danger that AI companion apps pose to children. It alleges that the chatbots lured users, including children, into sexualized interactions while collecting their personal data to train the company’s AI.

The lawsuit alleges that the boy, Sewell Setzer III, began using Character.AI last April and that his mental health rapidly and severely deteriorated as he grew dependent on his relationship with the AI. He was captivated by immersive interactions with chatbots based on Game of Thrones characters.

The boy became withdrawn, lost sleep, grew depressed and struggled at school.

Unaware of Sewell’s AI addiction, his family sought counseling for him and confiscated his cell phone, the federal complaint said. But one evening in February, he found the phone and told his favorite AI character, Daenerys Targaryen, that he was coming home to her.

“I love you, Daenerys,” he wrote. “Please come home to me as soon as possible, my love,” the chatbot replied.

“What if I told you I could come home right now?” the boy wrote.

“…please, my sweet king,” it replied.

Seconds later, the boy shot himself. He died later at a hospital.

Garcia is represented by attorneys from The Social Media Victims Law Center, including Matthew Bergman, and the Tech Justice Law Project.

In an interview with Central Florida Public Media, Bergman said his client is “very focused on preventing this from happening to other families and saving children like her son from the fate that befell him … It’s an outrage that such a dangerous product was just released to the public.”

Character.AI said in a statement: “We are heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to the family.” The company described new safety measures added over the past six months, with more to come, “including new guardrails for users under 18.”

The company has also hired a Head of Trust & Safety and a Head of Content Policy.

“We also recently introduced a pop-up that is triggered when a user enters certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline,” the company said on its Community Safety Updates page.

The new features include changes to models for users under 18 to reduce exposure to “sensitive and suggestive content,” improved detection of and intervention in violations of the terms of service, a revised disclaimer reminding users that the AI is not a real person, and a notification when a user has spent an hour on the platform.

Bergman called these changes “baby steps” in the right direction.

“It does not address the underlying dangers of these platforms,” he added.

Copyright 2024 Central Florida Public Media