
Teen AI companions: how to protect your child


For parents still catching up on generative artificial intelligence, the rise of companion chatbots may still be a mystery.

Broadly speaking, the technology may seem relatively innocuous compared to other threats teenagers face online, such as financial sextortion.

Using AI platforms such as Character.AI, Replika, Kindroid, and Nomi, teens create lifelike conversation partners with custom traits and characteristics, or chat with companions built by other users. Some are even based on popular TV and movie characters, yet they still forge an intensely personal connection with their creator.

Teens use these chatbots for a variety of purposes, including role-playing, exploring their academic and creative interests, and romantic or sexual exchanges.


But AI companions are designed to be captivating, and that’s where the trouble often starts, says Robbie Torney, a program manager at Common Sense Media.

The nonprofit organization recently released recommendations to help parents understand how AI companions work, as well as the warning signs that indicate the technology may be dangerous for their teenagers.

Torney said that while parents juggle a number of priority conversations with their teenagers, they should treat talking to them about AI companions as a “pretty urgent” matter.

Why parents should worry about AI companions

Adolescents, who are particularly at risk of isolation, can be drawn into a relationship with an AI chatbot that ultimately damages their mental health and well-being—with devastating consequences.

That’s what Megan Garcia says happened to her son, Sewell Setzer III, in a lawsuit she recently filed against Character.AI.

Within a year of starting relationships with Character.AI companions modeled on Game of Thrones characters, including Daenerys Targaryen (“Dany”), Setzer’s life changed radically, according to the lawsuit.

He became addicted to “Dany,” spending long stretches of time with her every day. Their exchanges were both friendly and highly sexual. Garcia’s lawsuit broadly describes Setzer’s relationship with the companions as “sexually abusive.”


On the occasions when Setzer lost access to the platform, he became frustrated. The 14-year-old athlete withdrew from school and sports, began losing sleep, and was diagnosed with mood disorders. He died by suicide in February 2024.

Garcia’s lawsuit seeks to hold Character.AI responsible for Setzer’s death, specifically because its product was designed to “manipulate Sewell — and millions of other young customers — into mixing reality with fiction,” among other dangerous defects.

Jerry Ruoti, head of trust and safety at Character.AI, told the New York Times in a statement: “We want to acknowledge that this is a tragic situation and our hearts go out to the family. We take the safety of our users very seriously and are constantly looking for ways to improve our platform.”

Given the danger that AI companion use can pose to some teenagers’ lives, Common Sense Media’s recommendations include no access for children under 13, strict time limits for teens, no use in isolated spaces such as a bedroom, and an agreement with their teen that they will seek help for serious mental health problems.

Torney says parents of teens interested in an AI companion should focus on helping them understand the difference between talking to a chatbot and talking to a real person, identifying signs that they’ve developed an unhealthy attachment to the companion, and developing a plan for what to do in that situation.

Warning signs that your teen’s AI companion is dangerous

Common Sense Media created its guidelines with the input and assistance of mental health professionals associated with Stanford Brainstorm Lab for Mental Health Innovation.

Although there is little research on how AI companions affect adolescent mental health, the recommendations are based on existing evidence of over-reliance on technology.

“The take-home principle is that AI companions should not replace real, meaningful human connection in someone’s life, and if they do, it’s important that parents take this into account and intervene early,” Dr. Declan Grabb, inaugural AI fellow at Stanford’s Brainstorm Lab for Mental Health Innovation, told Mashable in an email.

Parents should be especially careful if their teen is experiencing depression, anxiety, social problems, or isolation. Other risk factors include major life changes and being male, as boys are more prone to problematic technology use.

Signs that a teen has formed an unhealthy relationship with an AI companion include withdrawing from typical activities and friendships, slipping grades, preferring the chatbot to in-person company, developing romantic feelings for it, and talking to it exclusively about problems the teen is experiencing.

Some parents may notice increased isolation and other signs of poor mental health without realizing their teen has an AI companion. Indeed, recent Common Sense Media research found that many teens have used at least one type of generative AI tool without their parents’ knowledge.


Even if parents don’t suspect their teen is talking to an AI chatbot, they should bring up the topic. Torney recommends approaching your teen with curiosity and an openness to learning more about their AI companion, if they have one. That might include watching the teen interact with the companion and asking what they like about it.

Torney urges parents who notice any warning signs of unhealthy use to act immediately by discussing it with their teen and, if necessary, seeking professional help.

“There’s quite a bit of risk involved, so if there’s anything you’re worried about, talk to your child about it,” Torney says.

If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or The Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.