
Meta may halt AI development for systems deemed too risky under new Frontier AI Framework
Meta aims to make AGI publicly available but says it may halt development of AI systems it deems too risky. Its Frontier AI Framework defines two tiers: "high-risk" systems, which could facilitate severe cybersecurity or biological attacks, and "critical-risk" systems, which could lead to catastrophic outcomes. Meta will limit access to high-risk systems and halt development of critical-risk systems until they can be made safer.