Meta Introduces Framework to Limit Release of High-Risk AI Systems

The company’s new policy outlines measures to address potential catastrophic risks posed by advanced AI models.

  • Meta's Frontier AI Framework categorizes AI systems into 'high risk' and 'critical risk' based on their potential to cause harm.
  • High-risk systems are those that could facilitate cyberattacks or aid in the development of chemical and biological weapons, but whose risks are considered mitigable.
  • Critical-risk systems are those that could lead to catastrophic outcomes, such as fully automated cyberattacks or the proliferation of biological weapons, and for which no viable mitigation strategies exist.
  • Meta states it will halt development and prevent the release of critical-risk AI models until they can be deemed safe for deployment.
  • The policy responds to criticism of Meta's open AI development strategy and contrasts its approach with competitors such as China's DeepSeek, which lacks similar safeguards.