Meta’s Free-Expression Shift Slashes Content Takedowns as Harmful Posts Increase

Meta is pairing large language models with a crowd-sourced fact-checking system to improve content moderation accuracy

Image: Meta boss Mark Zuckerberg at the fights

Overview

  • Meta removed about 1.6 billion pieces of content for rule violations in Q1 2025, down from nearly 2.4 billion in the previous quarter after loosening enforcement.
  • The company reports a roughly 50 percent reduction in enforcement mistakes on its U.S. platforms between Q4 2024 and Q1 2025.
  • On Facebook, the prevalence of violent and graphic content rose to 0.09 percent in Q1, and bullying and harassment climbed to 0.07–0.08 percent following the policy changes.
  • Meta retains proactive moderation for teen accounts to shield younger users from harmful posts such as bullying.
  • The firm has rolled out a community-driven fact-checking system similar to X’s Community Notes and is piloting large language models to improve review efficiency.