
When the Rules Don’t Align: Why the Partnership on AI Is Trying to Glue Together a Fragmented Global AI Policy Landscape
By Jordan Vale
In conference rooms from Brussels to San Francisco, policy wonks and ethicists are trading worried glances. New laws, billions in defense contracts, and a raft of multistakeholder forums have left AI governance splintered; the Partnership on AI (PAI) is positioning itself as the pragmatic bridge‑builder the field now needs.
Fragmentation is no longer academic. From January 2020 through March 2025, lawmakers enacted 147 AI‑related laws, each carving out different definitions, obligations, and enforcement approaches, according to a CSET analysis published November 4, 2025. At the same time, major labs and cloud providers are deepening ties with defense agencies and commercial partners, shifting incentives away from cooperative safety regimes.
A broker steps into the breach
That mismatch - more rules, more money for risky uses, and uneven representation of countries and civil‑society actors - is the immediate problem PAI says it can help solve. In October 2025 PAI announced an expanded European steering committee and ten new partners, bringing its membership to more than 140 organizations across 17 countries, and it is explicitly pitching itself as the honest broker between industry, governments, and civil society.
PAI has been quietly reshaping its role from convener to intermediary. On October 2, 2025, it launched its first international steering committee for Europe with co‑hosts including the BBC and the Centre for European Policy Studies, and on October 7 it added ten new partners ranging from research institutes to industry groups. "As AI capabilities evolve and adoption grows globally, working together across borders and sectors is more important than ever," PAI CEO Rebecca Finlay said in the announcement.
PAI’s pitch is practical: where multilateral fora - the UN, G7, OECD - often stall over geopolitical disagreements, a multistakeholder platform can prototype standards, coordinate evaluations, and push interoperable practices. PAI already lists priorities such as transparency in supply chains, shared definitions for "public AI" and "sovereign AI," and regionally grounded policy tools, all intended to reduce the costs of regulatory divergence for smaller states and civil‑society groups.
When safety takes a back seat to defense dollars
The governance gap is widening at precisely the moment private incentives are shifting. The AI Now Institute documented a sharp turn toward defense engagements in 2024-25: OpenAI removed a ban on military uses in 2024 and by June 2025 had a Department of Defense deal reportedly worth $200 million; Anthropic signed a separate $200 million DoD contract and partnered with Palantir. Those moves, AI Now warned on September 25, 2025, have rerouted talent and attention away from public‑good safety research.
What a pragmatic governance agenda looks like
The realignment toward defense funding matters because it changes what gets prioritized in standards and audits. Independent voices - academics and civil‑society researchers - worry that models developed with defense money will be optimized for robustness in contested scenarios rather than for minimizing harms in health care, education, or misinformation. "We can’t rely on companies grading their own homework," Amba Kak of the AI Now Institute told the UN General Assembly on September 25, 2025, arguing for independent scientific review and resourcing.
Patchwork rules, uneven power: who wins and who loses
Policy divergence is already producing winners and losers. CSET’s analysis of 147 laws between 2020 and March 2025 shows governments are experimenting with sectoral and omnibus approaches; California‑style measures and EU‑style horizontal rules are both emerging. That plurality favors large firms that can absorb compliance costs and shape standards through litigation and lobbying; smaller firms, startups in the Global South, and community groups risk being squeezed out.
Sources
- Partnership on AI Welcomes 10 New Partners - Partnership on AI, 2025-10-07
- Global Consensus on AI is Fragmented, PAI has a Plan - Partnership on AI, 2025-10-02
- California’s Approach to AI Governance - CSET, Georgetown, 2025-11-04
- How AI safety took a backseat to military money - AI Now Institute, 2025-09-25
- AI Now Co-ED Amba Kak Gives Remarks Before the UN General Assembly on AI Governance - AI Now Institute, 2025-09-26