Meta lifts hate rules as Zuckerberg cites “recent election” as catalyst

(AP) – Fact-checking wasn’t the only thing Meta abandoned on its platforms in preparation for a second Trump administration. The social media giant has also loosened its rules on hate and abuse — again following the example of Elon Musk’s X — particularly when it comes to sexual orientation and gender identity, as well as immigration status.

The changes are worrying advocates for vulnerable groups, who say Meta’s decision to cut back on content moderation could lead to real harm. Meta CEO Mark Zuckerberg said Tuesday that the company would “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse,” citing the “recent election” as a catalyst.

For example, Meta added the following to its rules — called community standards — which users are asked to abide by:

“We allow accusations of mental illness or abnormality if they are based on gender or sexual orientation, given political and religious discourses about transgenderism and homosexuality, and the common casual use of words like ‘queer.’” In other words, it is now permitted to call gay people mentally ill on Facebook, Threads, and Instagram. Other slurs and what Meta calls “harmful stereotypes historically associated with bullying” — such as blackface and Holocaust denial — are still banned.

The Menlo Park, Calif.-based company also removed a sentence from its “policy statement” explaining why certain hateful conduct is prohibited. The deleted sentence said that hate speech “creates an environment of intimidation and alienation, and in some cases can promote violence offline.”

“The policy change is a tactic to curry favor with the new administration and reduce the business costs of content moderation,” said Ben Leiner, a professor at the University of Virginia’s Darden School of Business who studies political and technology trends. “This decision will cause real harm not only in the United States, where there has been a surge of hate and misinformation on social media platforms, but also abroad, where misinformation on Facebook has fueled ethnic conflict in places like Myanmar.”

The company, in fact, acknowledged in 2018 that it did not do enough to prevent its platform from being used to “incite offline violence” in Myanmar, fueling hatred and violence against the country’s Rohingya Muslim minority.

Arturo Bejar, a former Meta engineering director known for his expertise in combating online harassment, said that while most of the attention on Tuesday was on the company’s fact-checking announcement, he was more concerned about the changes to Meta’s policies on harmful content.

This is because instead of proactively enforcing rules against things like self-harm, bullying and harassment, Meta will now rely on user reports before taking any action. The company said it plans to focus its automated systems on “combating illegal and serious violations such as terrorism, child sexual exploitation, drugs, fraud and scams.”

Bejar said this is despite “Meta knowing that by the time a report is filed and reviewed, the content will have done most of its harm.”

“I shudder to think what these changes will mean for our youth. Meta is abdicating its responsibility for safety, and we won’t know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and it goes to extraordinary lengths to dilute or stop legislation that could help,” he said.