Global AI Superintelligence Ban Urged by Diverse Coalition
By Jordan Vale

Image: futureoflife.org
A global coalition of scientists, faith leaders, policymakers, and artists is calling for a prohibition on superintelligent AI until safety, controllability, and broad public buy-in are assured.
The push, led by the Future of Life Institute, cites the convergence of rapid frontier AI advances with broad societal risks. The initiative argues that frontier systems could surpass most people at many cognitive tasks within just a few years, creating the potential for both transformative solutions and outsized danger. The signatories describe a world where the development of “superintelligence” could outpace existing governance, leaving decisions about safety and ethics in the hands of a few technologists rather than the public at large.
Prominent researchers and public figures behind the call include Nobel laureates, Turing Award winners, and national security experts, joined by religious leaders, authors, and cultural figures. Notable signatories reportedly include Mike Mullen, the retired U.S. Navy Admiral and former Chairman of the Joint Chiefs of Staff, underscoring how the issue sits at the crossroads of security and civil liberties. The message is framed not as a ban on AI research generally, but as a halt on pursuing superintelligence until robust safety mechanisms exist and broader public engagement shapes the trajectory.
In tandem with the call, the coalition points to a recent poll that underscores public hesitancy about unleashing such technologies. The poll’s takeaway is clear to many policy observers: if the public doesn’t feel safe or included in decision-making, broad political momentum for laissez-faire or accelerated development could stall.
Policy implications loom large for both regulators and industry. For policymakers, the appeal is for a global, transparent pause and a focus on standards for alignment, verification, and governance that can be publicly debated and democratically legitimized. For technology firms, the message is a reminder that speed must be balanced against risk management, interoperability of safety protocols, and cross-border accountability—especially as markets compete for the advantages of frontier capabilities.
Industry insiders watching the debate say the real test will be the mechanism and scope of any prohibition. A world in which a handful of jurisdictions move first could invite governance gaps—what one risk analyst calls regulatory arbitrage, where developers relocate to friendlier or less regulated environments. The coalition’s stance implicitly pressures safety researchers to push forward with verifiability, containment, and red-teaming as nonnegotiable prerequisites before any major advance.
Two concrete lessons for practitioners emerge from the moment. First, safety and governance cannot be an afterthought: any prohibition would need enforceable, measurable criteria for what counts as “superintelligence” and what constitutes “public buy-in,” with clear milestones for revisiting the pause. Second, the political economy matters: funding flows, academic incentives, and international collaboration frameworks will determine whether a pause becomes a durable norm or a temporary consensus that dissolves as market pressure intensifies.
Beyond the rhetoric, the event sharpens a long-running industry debate: will safety-led governance slow innovation, or will it actually accelerate trust and adoption by reducing catastrophic risk? The signal from the coalition is unmistakable: at the frontier, governance is not optional—it is the determinant of whether humanity can benefit from AI without paying an unacceptable price.