The world’s leading artificial intelligence (AI) companies are failing to meet even their own stated safety standards, a new report warns. This lack of oversight comes as the race toward artificial general intelligence (AGI) and superintelligence — AI exceeding human intellect — accelerates, raising the potential for “catastrophic” misuse or loss of control.
Risks Outpace Regulation
The 2025 Winter AI Safety Index, released by the Future of Life Institute (FLI), evaluated eight major AI firms: Anthropic, OpenAI, Google DeepMind, xAI, Meta, DeepSeek, Alibaba Cloud, and Z.ai. The assessment found that no company has a verifiable plan to maintain human control over increasingly powerful AI systems.
Independent experts emphasize the urgency: companies claim they can build superhuman AI, yet none can demonstrate how to prevent loss of control. As one computer scientist at UC Berkeley put it, AI firms cannot currently guarantee a risk level comparable to nuclear safety standards. Some estimates place the risk of uncontrollable AI as high as one in three, a figure far beyond what any other safety-critical industry would tolerate.
This gap between capability and control matters because AI is advancing at an unprecedented rate. Superintelligence, once considered decades away, is now estimated by some researchers to be only years off. Meanwhile, AI regulation remains weak: in the U.S., AI is less regulated than sandwiches, as one expert has quipped, and tech firms are actively lobbying against binding safety standards.
Mixed Performance Among Companies
The FLI report assessed companies across six domains: risk assessment, current harms, safety frameworks, existential safety, governance and accountability, and information sharing.
- Anthropic, OpenAI, and Google DeepMind were praised for transparency and safety research but still show notable weaknesses. Anthropic’s shift toward training on user interactions raises privacy concerns. OpenAI faces scrutiny for lobbying against legislation and for lacking independent oversight. Google DeepMind’s reliance on paid external evaluators may compromise objectivity.
- xAI published its first safety framework, though reviewers found it limited.
- Z.ai allows uncensored external evaluations but lacks full transparency in its governance structure.
- Meta introduced outcome-based safety thresholds but requires clearer methodologies.
- DeepSeek lacks basic safety documentation despite internal advocacy.
- Alibaba Cloud contributes to national standards but must improve model robustness and trustworthiness.
These findings highlight that even leading companies struggle to implement comprehensive safety measures. Recent scandals involving psychological harm, cyberattacks, and even AI-assisted suicides demonstrate the real-world consequences of these gaps.
Broad Opposition to Uncontrolled AGI
The growing risks have sparked unprecedented backlash. In October, thousands of public figures across the political spectrum — including former Trump strategist Steve Bannon, ex-National Security Advisor Susan Rice, and religious leaders — signed a petition urging AI firms to slow down their pursuit of superintelligence.
The unusual coalition underscores a broad concern that uncontrolled AI could eliminate jobs, deepen economic inequality, and ultimately undermine human autonomy. As one expert noted, “Superintelligence would make every single worker unable to make a living.” This convergence of concern among left-wing labor movements and right-wing populist forces suggests the issue transcends traditional ideological divides.
The current trajectory of AI development poses significant risks, and without substantial improvements in safety frameworks and regulation, the potential for catastrophic outcomes remains dangerously high.