Meta scraps hate speech rules

In preparation for the second Trump administration, Meta did not stop at ending fact-checking on its platforms. The social media giant has also loosened its rules on hate speech and abuse, again following the lead of Elon Musk's X, particularly when it comes to sexual orientation and gender identity, as well as immigration status.

The changes are worrying advocates for vulnerable groups, who say Meta's decision to scale back content moderation could lead to real-world harm. Meta CEO Mark Zuckerberg said Tuesday that the company would remove restrictions on topics "like immigration and gender that are just out of touch with mainstream discourse," citing the "recent elections" as a catalyst.

For example, Meta has added the following passage to its rules, the so-called Community Standards, which users are asked to follow:

"We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like 'weird.'"

In other words, it is now permitted to call gay people mentally ill on Facebook, Threads, and Instagram. Other slurs and what Meta calls "harmful stereotypes historically linked to intimidation," such as blackface and Holocaust denial, are still banned.

The Menlo Park, Calif.-based company also removed a sentence from its policy rationale explaining why it bans certain hateful conduct. The deleted sentence said that hate speech "creates an environment of intimidation and exclusion, and in some cases may promote offline violence."

"The policy change is a tactic to curry favor with the incoming administration and reduce the business costs of content moderation," said Ben Leiner, a lecturer at the University of Virginia's Darden School of Business who studies political and technology trends. "This decision will cause real harm not only in the United States, where there has been a surge of hate speech and disinformation on social media platforms, but also abroad, where disinformation on Facebook has accelerated ethnic conflict in places like Myanmar."

In fact, Meta acknowledged in 2018 that it had not done enough to prevent its platform from being used to "incite offline violence" in Myanmar, fueling communal hatred and violence against the country's Rohingya Muslim minority.

Arturo Bejar, a former engineering director at Meta known for his expertise in combating online harassment, said that while most of the attention was on the company's fact-checking announcement Tuesday, he was more concerned about the changes to Meta's harmful-content policies.

That is because instead of proactively enforcing rules against things like self-harm, bullying and harassment, Meta will now rely on user reports before taking any action. The company said it plans to focus its automated systems on "illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams."

This is despite the fact, Bejar said, that "Meta knows that by the time a report is submitted and reviewed, the content will have done most of its harm."

"I shudder to think what these changes will mean for our youth. Meta is abdicating its responsibility to safety, and we won't know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and they go to extraordinary lengths to dilute or stop legislation that could help," he said. (AP)