
Illinois researchers study teenagers' use of generative AI and its safety risks

December 12 – CHAMPAIGN, Ill. – Teenagers use generative artificial intelligence (GAI) for many purposes, including emotional support and social interaction. A study by researchers at the University of Illinois Urbana-Champaign found that parents have a poor understanding of GAI, of how their children use it and of its potential risks, and that GAI platforms do not offer enough protection to keep children safe.

The research paper by information science professor Yang Wang, co-director of the Social Computing Systems Laboratory, and doctoral student Yaman Yu is one of the first published sources of data on children's use of GAI and its risks. Wang and Yu will present their findings in May 2025 at the IEEE Symposium on Security and Privacy, the premier computer security conference.

Wang and Yu said teenagers often use GAI platforms, but little is known about how they use them, and their perceptions of the risks and how they deal with them had not been previously studied.

The researchers analyzed the content of 712 posts and 8,533 comments on Reddit related to teen use of GAI. They also interviewed seven teenagers and 13 parents to understand their perceptions of safety and how parents tried to reduce the risks.

They found that teenagers often use GAI chatbots as therapeutic assistants or confidants that provide non-judgmental emotional support and help them cope with social problems. AI chatbots are built into social media platforms such as Snapchat and Instagram, and teenagers include them in group chats, use them to learn social skills and sometimes treat them as romantic partners. They use GAI for academic purposes such as writing essays, paraphrasing text and generating ideas. Teens also posted on Reddit about requesting sexual or violent content and about bullying AI chatbots.

“It’s a very hot topic. A lot of teenagers are talking about Character.AI and how they’re using it,” Yu said, referring to the platform for creating and interacting with character-based chatbots.

Wang and Yu reported that both parents and children held significant misconceptions about generative AI. Parents had little understanding of their children’s use of GAI, and their own exposure to the tools was limited. They were unaware that their children used tools such as Midjourney and DALL-E to create images and AI characters. Parents viewed AI as a homework tool that functioned like a search engine, while their children mainly used it for personal and social reasons, the researchers said.

Teens reported concerns about over-reliance on chatbots to fill gaps in their personal connections, about chatbots being used to create offensive content, about unauthorized use of their personal information and about the spread of harmful content such as racist remarks. They also worried about artificial intelligence replacing human labor and infringing intellectual property.

Parents believed that AI platforms collect large amounts of data, such as user demographics, conversation histories and browsing histories, and were concerned that children were sharing personal or family information. However, parents “did not fully appreciate the amount of sensitive data their children may share with GAI…including details of personal injuries, medical records, and private aspects of their social and sexual lives,” the researchers wrote. Parents were also concerned that children could inadvertently spread misinformation, and worried that over-reliance on artificial intelligence would cause their children to avoid critical thinking.

Parents said they wanted a dedicated AI for kids that learns only from age-appropriate content, or a system with built-in age and topic controls. The researchers reported that the children said their parents did not give them specific guidance on using GAI and that they wanted their parents to discuss its ethical use rather than restrict it.

GAI platforms provide limited protections for children, focus on restricting explicit content and do not offer parental controls tailored to AI. Both the risks to children and the strategies for mitigating them are more complex and nuanced than simply blocking objectionable content, Wang and Yu said. One key challenge in identifying and preventing objectionable content on GAI platforms is that, unlike static online content, GAI generates unique content dynamically in real time, they said.

The researchers said it is critical that platforms provide transparent explanations of the security and privacy risks experts have identified, and they recommended offering content filters that can be tailored to the needs of individual families and their children’s developmental stages.
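As a rough illustration of what such family-tailored filtering could look like, the Python sketch below derives a per-child filter profile from a developmental stage and screens messages that have already been labeled with risk categories. The category names, age cutoffs and actions are hypothetical assumptions made for illustration; they are not drawn from the study or from any real platform's controls.

```python
# Hypothetical sketch of a family-configurable GAI content filter.
# All category names, age cutoffs and actions are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class FilterProfile:
    """Per-child filter settings that a parent could tune."""
    age: int
    blocked: set[str] = field(default_factory=set)   # block outright
    flagged: set[str] = field(default_factory=set)   # allow, but notify parent


def default_profile(age: int) -> FilterProfile:
    """Derive a starting profile from a child's developmental stage."""
    if age < 13:
        return FilterProfile(age, blocked={"explicit", "violence", "romance"},
                             flagged={"personal_disclosure"})
    if age < 16:
        return FilterProfile(age, blocked={"explicit", "violence"},
                             flagged={"romance", "personal_disclosure"})
    return FilterProfile(age, blocked={"explicit"},
                         flagged={"personal_disclosure"})


def screen(message_categories: set[str], profile: FilterProfile) -> str:
    """Return a coarse action for a message labeled with risk categories."""
    if message_categories & profile.blocked:
        return "block"
    if message_categories & profile.flagged:
        return "allow_and_notify"
    return "allow"


# Example: a 14-year-old's profile flags romantic role-play for a parent
# notification rather than blocking it outright.
print(screen({"romance"}, default_profile(14)))  # -> "allow_and_notify"
```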

However, safety strategies cannot be purely technical; they must go beyond filtering and restriction and recognize the tension between children’s autonomy and parental control in managing online risk. Wang and Yu said adults should first understand the motives behind children’s use of GAI. They proposed a support chatbot that could create a safe environment for explaining potential risks, building resilience and suggesting coping strategies to teenage users.

“Artificial intelligence technologies are evolving very quickly, as are the ways people use them,” Wang said. “There are some things we can learn from past domains, such as addiction and inappropriate behavior in social media and online gaming.”

Wang said their research is a first step toward addressing the problem. He and Yu are creating a taxonomy of risk categories that can be used to discuss risks and the interventions that help mitigate them. It will also help detect early signals of risky behavior, such as the time spent on a GAI platform, the content of conversations and usage patterns such as the time of day when children use the platforms, Wang said.
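A minimal sketch of how the signals Wang mentions might be combined is shown below, assuming hypothetical session records and thresholds. The signal names and cutoffs are illustrative guesses, not the taxonomy Wang and Yu are building.

```python
# Toy heuristic over the early signals named in the article: time spent,
# time of day and conversation content. Thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class Session:
    minutes: int           # session length
    start_hour: int        # 0-23, local time
    risk_labels: set[str]  # e.g. labels from a content classifier


def early_signals(week: list[Session]) -> list[str]:
    """Flag coarse warning signs across a week of GAI sessions."""
    signals = []
    if sum(s.minutes for s in week) > 20 * 60:  # more than 20 hours/week
        signals.append("heavy_use")
    late = sum(1 for s in week if s.start_hour >= 23 or s.start_hour < 5)
    if late >= 3:                               # three or more late-night sessions
        signals.append("late_night_pattern")
    if any("self_harm" in s.risk_labels for s in week):
        signals.append("sensitive_content")
    return signals


# Example: long late-night sessions trigger two of the three signals.
week = [Session(300, 23, set()), Session(400, 1, set()), Session(600, 2, set())]
print(early_signals(week))  # -> ["heavy_use", "late_night_pattern"]
```

In practice, any such flags would presumably feed the kind of supportive, risk-explaining intervention the researchers describe, rather than an automatic restriction.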

He and Yu are working with Illinois psychology professor Karen Rudolph, director of the Family Research Laboratory, whose research focuses on adolescent development, to create age-appropriate interventions.

“It’s a very interdisciplinary topic, and we’re trying to tackle it in interdisciplinary ways, bringing in education, psychology and our knowledge of safety and risk management. The solution needs to be both technical and social,” Yu said.