TikTok targets beauty filters to support teen mental health

TikTok will soon block users under 18 from accessing beauty filters that change facial features, as the platform responds to growing concerns about the impact on teenagers’ mental health and self-esteem.

The restrictions, expected to roll out globally in the coming weeks, will apply to filters such as Bold Glamour, which smooths skin, plumps lips and reshapes eyes – effects that are often hard to distinguish from reality. Filters intended for comedic purposes, such as those that add animal ears or exaggerated facial features, will remain available to teenagers.

The changes follow a report by Internet Matters, a non-profit focused on children’s online safety, which found that these filters contribute to a “distorted worldview” by normalizing flawless images. Many teens, especially girls, reported feeling pressured to conform to these altered appearances, with some saying they found their unfiltered faces “ugly” after prolonged filter use.

Dr Nikki Su, TikTok’s head of safety and wellbeing in Europe, confirmed the new restrictions, saying the platform aims to “reduce social pressure on young users” and promote healthier online habits.

The effectiveness of these restrictions will depend on accurate age verification. TikTok plans to introduce automated systems using machine learning to identify users who misrepresent their age as part of a broader effort to remove underage users. The platform currently removes approximately 20 million accounts per quarter for age policy violations.

Chloe Setter, TikTok’s head of child safety public policy, acknowledged the challenges but said the company would take a “safety-first approach” even if it resulted in some users being mistakenly blocked. Users will be able to appeal if they believe they were removed in error.

The policy changes coincide with tougher rules in the UK under the Online Safety Act, which will require social media platforms to implement “highly effective” age verification. Ofcom, the UK’s communications regulator, has previously raised concerns about TikTok’s age-verification measures, saying their effectiveness has “not yet been established”.

Richard Collard, assistant head of child online safety policy at the NSPCC, welcomed TikTok’s move but stressed more needed to be done. “This is a positive step, but it is only the tip of the iceberg. Other social media platforms should follow suit and implement robust age verification systems,” he said.

Chloe Setter echoed those sentiments, saying TikTok will continue to improve its safety measures as it faces increased scrutiny from regulators. The platform also plans to tighten restrictions on users under the age of 13, an age group that has historically been difficult to police.

Other platforms are taking similar steps. Roblox recently restricted younger users from accessing violent or explicit content, and Instagram, which is owned by Meta, introduced “teen accounts” that allow parents to monitor and control their children’s activities on the app.

Andy Burrows, CEO of the Molly Rose Foundation, emphasized the role of regulation in driving the change. “It is clear that platforms are only making these changes to comply with incoming EU and UK regulation. This highlights the need for even more ambitious legislation to protect children online,” he said.

As the year draws to a close, TikTok’s global push on age verification signals a broader shift across social media toward prioritizing user safety, a trend expected to accelerate when new rules take effect in 2025.