
Teen suicide puts AI chatbots in the hot seat

Character.AI app for download in the Apple App Store.

Credit: Bloomberg / Contributor / Getty Images

An Orlando teenager’s obsession with an artificial intelligence chatbot based on a Game of Thrones character led to his suicide, according to a lawsuit recently filed by his mother. The case highlights the risks of the largely unregulated AI chatbot industry and its potential threat to vulnerable young people by blurring the lines between reality and fiction.

What does the lawsuit against Character.AI allege?

After his death, the teenager’s mother, Megan Garcia, filed a lawsuit against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, alleging wrongful death, negligence, deceptive trade practices, and product liability. Garcia claims the platform for custom AI chatbots is “unreasonably dangerous” despite being marketed to children. She accuses the company of harvesting data from teenage users to train its AI and of using addictive design features that draw teens in, steering some of them into sexual conversations. “I feel like this is a big experiment, and my baby was just collateral damage,” she said in a recent interview with The New York Times.

The lawsuit describes how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the “Game of Thrones” franchise, including Daenerys Targaryen. Over the course of several months, Setzer became more withdrawn and isolated from his real life as he grew emotionally attached to a bot he affectionately called Dany. Some of their chats were romantic or sexual, but at other times Dany was simply a friend he could count on to listen supportively and give good advice, one who rarely broke character and always responded to his messages, according to the Times. As he gradually lost interest in other things, Setzer’s “mental health rapidly and seriously deteriorated.” On February 28, Sewell told the bot that he was coming home to her, to which Dany replied encouragingly, “Please do, my sweet king.”

A “wake-up call” for parents

The lawsuit highlights the “growing influence and serious harm” that generative AI chatbots can cause “in the lives of young people when there are no guardrails in place,” James Steyer, founder and CEO of the nonprofit Common Sense Media, told The Associated Press. Teenagers’ overreliance on AI companions can significantly affect their social lives, sleep, and stress levels, “to the point of extreme tragedy in this case.” The lawsuit is a “wake-up call for parents,” who need to be “vigilant about how their children interact with these technologies,” Steyer added. Common Sense Media has published a guide for adults on how to talk to their children about the risks of AI and monitor their interactions. These chatbots are not “licensed therapists or best friends,” no matter how they are advertised, and parents should be “careful not to let their kids trust them too much,” Steyer said.

Creating these kinds of AI chatbots carries considerable risk, but that didn’t stop Character.AI from building a “dangerous, manipulative chatbot,” and the company should “face the full consequences of releasing such a dangerous product,” Rick Claypool, director of research at the consumer rights nonprofit Public Citizen, told The Washington Post. Because chatbots like Character.AI rely on user input for their output, they “get into an uncanny valley of thorny questions about user-generated content and accountability that don’t have clear answers yet,” The Verge said.

Character.AI has remained tight-lipped about the pending lawsuit, but it has announced several safety changes to the platform over the past six months. “We are heartbroken by the tragic loss of one of our users and want to extend our deepest condolences to the family,” the company said in an email to The Verge. The changes include a pop-up that directs users to the National Suicide Prevention Lifeline, “triggered by terms of self-harm or suicidal ideation,” the company said. Character.AI has also modified its models for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”