
Meta Removes Hate Rules Citing ‘Recent Election’

Meta is doing away with more than just fact-checking on its platforms as it prepares for the second Trump administration.

The social media giant has also loosened its rules on hate and abuse — again following the example of Elon Musk’s X — particularly when it comes to sexual orientation and gender identity, as well as immigration status.

The changes are worrying advocates for vulnerable groups, who say Meta’s decision to cut back on content moderation could lead to real harm.

Meta CEO Mark Zuckerberg said on Tuesday that the company would remove restrictions on topics like immigration and gender that are “out of touch with mainstream discourse,” citing the “recent election” as a catalyst.

For example, Meta has added the following language to its rules, the so-called Community Standards, which users are asked to follow:

“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

In other words, it is now permitted to call gay people mentally ill on Facebook, Threads, and Instagram. Other slurs and what Meta calls “harmful stereotypes historically linked to intimidation,” such as Blackface and Holocaust denial, remain banned.

The Menlo Park, Calif.-based company also removed a sentence from its “policy rationale” explaining why it bans certain hateful conduct.

The deleted sentence said that hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.”

“The policy change is a tactic to curry favor with the new administration and reduce the business costs of content moderation,” said Ben Leiner, a professor at the University of Virginia’s Darden School of Business who studies political and technology trends.

“This decision will cause real harm not only in the United States, where there has been a surge of hate and misinformation on social media platforms, but also abroad, where misinformation on Facebook has fueled ethnic conflicts in places like Myanmar.”

In fact, Meta acknowledged in 2018 that it had not done enough to prevent its platform from being used to “incite offline violence” in Myanmar, where Facebook fueled hatred and violence against the country’s Rohingya Muslim minority.

Arturo Bejar, a former Meta engineering director known for his expertise in combating online harassment, said that while the company’s fact-checking announcement drew more attention on Tuesday, he was more concerned about the changes to Meta’s policies on harmful content.

This is because instead of proactively enforcing rules against things like self-harm, bullying, and harassment, Meta will now rely on user reports before taking action. The company said it plans to focus its automated systems on “tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.”

Bejar said this is despite “Meta knowing that by the time a report is submitted and reviewed, the content will have done most of its harm.”

“I shudder to think what these changes will mean for our youth. Meta is abdicating its responsibility for safety, and we won’t know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and it goes to extraordinary lengths to dilute or stop legislation that could help,” he said.