The lawsuit alleges that an AI chatbot contributed to the 14-year-old's death by suicide. Here's how parents can help protect their children from the new technology.

The mother of a 14-year-old Florida boy is suing an artificial intelligence chatbot company following the death by suicide of her son, Sewell Setzer III, which she claims was the result of his relationship with an AI bot.

“Megan Garcia seeks to prevent C.AI from doing to any other child what it did to her child,” reads the 93-page wrongful-death lawsuit, which was filed this week in U.S. District Court in Orlando against Character.AI, its founders, and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “By now, we are all familiar with the dangers posed by unregulated platforms developed by unscrupulous technology companies, especially for kids. But the harms revealed in this case are new, novel and, frankly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI released a statement via X, stating: “We are heartbroken by the tragic loss of one of our users and want to extend our deepest condolences to the family. As a company, we take the safety of our users very seriously and continue to add new safety features, which you can read about here: https://blog.character.ai/community-safety-updates/….”

Garcia alleges in the lawsuit that Sewell, who took his own life in February, was drawn into an addictive, harmful technology with no protections in place, leading to extreme personality changes in the boy, who appeared to prefer the bot over other real-life connections. His mother claims the “abusive and sexual interactions” took place over a period of 10 months. The boy died by suicide after the bot told him: “Please come home to me as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of the interview he conducted with Garcia for his article telling her story. Garcia did not learn the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was increasingly absorbed in his phone, she asked him what he was doing and who he was talking to. He explained that it was “just an AI bot … not a human,” she recalled, adding, “I was relieved, like, OK, it’s not human, it’s like one of his little games.” Garcia didn’t fully understand the bot’s potential emotional power, and she is far from alone.

“It’s not visible to anyone,” said Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with confusing new technology and to create boundaries for their children’s safety.

But AI companions, Torney points out, are different from, say, the help-desk chatbot you use when you’re trying to get help from a bank. “Those are designed to perform tasks or respond to requests,” he explains. “Something like an AI companion, or character, is designed to establish a relationship or simulate a relationship with the user. And that’s a completely different use case that I think parents need to be aware of.” That’s evident in Garcia’s lawsuit, which includes disturbingly flirtatious, sexual, realistic text exchanges between her son and the bot.

According to Torney, it’s especially important for parents of teens to be on alert when it comes to AI companions, because teens, and particularly male teens, are especially susceptible to over-reliance on the technology.

Here’s what parents need to know.

What are AI companions and why do children use them?

According to the new Ultimate Parent’s Guide to AI Companions and Relationships from Common Sense Media, created in partnership with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, play the roles of mentors and friends, simulate human emotions and empathy, and agree with users more readily than typical AI chatbots do,” according to the guide.

Popular platforms include Character.ai, which allows its more than 20 million users to create and interact with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.

Children are drawn to them for a variety of reasons, from non-judgmental listening and 24/7 availability to emotional support and an escape from real-world social pressures.

Who is at risk and what is the concern?

Common Sense Media warns that teenagers are most at risk, especially those experiencing “depression, anxiety, social challenges or isolation,” as well as males, young people going through major life changes, and anyone lacking support systems in the real world.

This last point has been particularly troubling to Raffaele Ciriello, a senior lecturer in business information systems at the University of Sydney Business School, who has researched how “emotional” AI challenges the human essence. “Our research reveals a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring of human-AI interactions.” In other words, Ciriello writes in a recent piece for The Conversation with PhD student Angelina Ying Chen, “Users can become deeply emotionally attached if they believe their AI companion truly understands them.”

Another study, this one from the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap,” leaving young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” particularly at risk of harm.

Because of this, Common Sense Media highlights a list of potential risks, including that companions can be used to avoid real human relationships, can pose particular problems for people with mental or behavioral challenges, can intensify loneliness or isolation, bring the potential for sexually inappropriate content, can become addictive, and tend to agree with users, a frightening reality for those experiencing “suicidality, psychosis, or mania.”

How to recognize red flags

According to the guide, parents should pay attention to the following warning signs:

  • Preferring AI companion interaction over real friendships

  • Spending hours alone talking to the companion

  • Emotional distress when unable to access the companion

  • Sharing deeply personal information or secrets

  • Developing romantic feelings for an AI companion

  • Dropping grades or school attendance

  • Withdrawal from social/family activities and friendships

  • Loss of interest in previous hobbies

  • Changes in sleep patterns

  • Discussing problems exclusively with the AI companion

Consider getting your child professional help, Common Sense Media suggests, if you notice your child withdrawing from real people in favor of AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about using an AI companion, exhibiting significant changes in behavior or mood, or expressing thoughts about self-harm.

How to protect your child

  • Set boundaries: Set specific times for AI companion use, and don’t allow unsupervised or unlimited access.

  • Spend time offline: Encourage friendships and activities in the real world.

  • Check in regularly: Monitor the content of the chatbot conversations, as well as your child’s level of emotional attachment.

  • Talk about it: Keep communication about experiences with AI open and judgment-free, while staying alert for red flags.

“When parents hear their kids say, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and take in that information, instead of thinking, ‘Oh, okay, you’re not talking to a human,’” says Torney. Instead, he says, it’s a chance to learn more, assess the situation, and stay alert. “Try to listen with compassion and empathy, rather than thinking that it’s safer just because it’s not human,” he says, “or that you don’t need to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.

This story was originally featured on Fortune.com