Building Trust and Defining the Future at PAI’s 2025 Partner Forum
Analysis

Trust, Chips and Councils: How Agentic AI Is Forcing New Rules for a Fragmented World

By Jordan Vale

At Partnership on AI’s San Francisco forum, about a dozen researchers debated whether an AI agent should be treated like a contractor or a tool, while a parallel drama played out in Beijing and Taipei over chip licenses. The technical promise of autonomous agents is colliding with fractured governance and a rapidly tightening hardware choke point.

Agentic AI - systems that plan and act across platforms with minimal human input - moved from academic white papers into boardrooms this year. At PAI’s Partner Forum on October 30, 2025, speakers warned that adoption will stall unless real-time safety and social trust are solved in parallel; Rebecca Finlay, PAI’s CEO, framed trust as “a journey, not a destination.” (PAI launched a new SAIGE advisory council on October 29 to push that work.)

At the same time, geopolitics has injected acute fragility into the AI stack. Beijing’s recent ban on some Nvidia H20-class accelerators, and the fraught licensing negotiations that preceded it, exposed how quickly access to compute can be weaponized. That combination - technical risk paired with supply-chain squeeze - makes governance urgent, not academic.

Agents meet accountability: PAI’s practical pivot

The Partnership on AI forum in late October pivoted the field’s conversation from abstract ethics to operable safeguards. Panels focused on “real-time failure detection” - a technical approach PAI highlighted in a recent report as critical for systems that can take multi-step actions without human sign-off (PAI, October 30, 2025).

Speakers threaded social concerns into engineering trade-offs. Paula Goldman of Salesforce said, “This has been the year of tremendous momentum for AI agents,” arguing that safety mechanisms will accelerate rather than throttle adoption. UC Berkeley’s Dawn Song pushed back in part, noting that guardrails must be engineered into both models and the orchestration layer that gives them agency.

PAI’s new SAIGE Council, announced October 29, 2025, brings together interdisciplinary expertise - from economics to cognitive science - with the explicit remit of advising on agentic AI, environmental impacts and labor effects. The council’s membership list includes Joaquín Quiñonero Candela and David Danks, positioning PAI to translate high-level norms into operational checklists that practitioners can use.

When compute becomes geopolitics

The policy talk had to contend with an immediate practical constraint: chips. AI Now analyzed a turbulent summer in which Nvidia lobbied hard for export licenses - ultimately securing restricted permissions in August 2025, only to face a fresh ban from China’s Cyberspace Administration in late October (AI Now, October 31, 2025).

That sequence exposed a brittle procurement landscape. Companies that had planned multi-month model training schedules suddenly faced uncertain access to H20-tier accelerators, costing time and money. For context, a single large training run for a modern generative model can consume tens of thousands of GPU-hours, translating into six-figure cloud bills or longer waits for on-premise clusters.
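To make that arithmetic concrete, the back-of-envelope sketch below multiplies an assumed per-GPU hourly rate by an assumed run size; both figures are illustrative, not vendor pricing or a specific company’s bill:

```python
# Back-of-envelope cost of one large training run.
# Both inputs are illustrative assumptions, not vendor pricing.
GPU_HOURLY_RATE_USD = 2.50   # assumed cloud rate per accelerator-hour
GPU_HOURS = 50_000           # assumed run size, e.g. 500 GPUs for 100 hours

total_cost_usd = GPU_HOURS * GPU_HOURLY_RATE_USD
print(f"Estimated cloud bill: ${total_cost_usd:,.0f}")  # → Estimated cloud bill: $125,000
```

Shifting either assumption - a pricier accelerator tier, or a longer run while waiting on constrained supply - moves the total quickly, which is why uncertain access to H20-tier hardware translates directly into budget risk.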

The hardware squeeze reshapes incentives for governance. States and large firms now wield leverage that can enforce conformity or fragment markets. If sanctioning compute becomes a credible tool of industrial policy, standards and regulatory regimes will have to account for strategic denial, not just technical compliance.

Standards, tools and the practitioner gap

A recurring theme at PAI and in CSET’s recent writing is the gap between high-level frameworks and the realities of companies trying to deploy AI responsibly. CSET warned that the proliferation of voluntary standards, contracts and terms of service creates a maze many organizations cannot navigate (CSET, October 23, 2025).

This practitioner gap matters because trust is behavioral, not rhetorical. Partnership on AI pointed to a widely cited MIT-linked study, mentioned at the forum, which estimated that about 95% of firms experimenting with AI pilots see little to no return on investment. That failure rate often traces to governance deficits - unclear accountability, opaque procurement rules and insufficient monitoring tools.

Operationalizing safety means two things at once: lowering the barrier to compliance for smaller teams and hardening the systems that large-scale actors depend on. PAI’s SAIGE Council intends to produce advice that is both normative and implementable: checklists, incident-response playbooks and metrics for real-time anomaly detection in agent behavior.
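PAI has not yet published such metrics, but a minimal sketch suggests what real-time anomaly detection in agent behavior could look like in practice: the hypothetical monitor below flags sudden bursts in an agent’s action rate using a rolling z-score. The window size, threshold, and actions-per-minute metric are all illustrative assumptions, not a PAI specification.

```python
from collections import deque
import statistics

class AgentActionMonitor:
    """Flag anomalous bursts of agent actions with a rolling z-score.

    Hypothetical sketch: the metric (actions per minute), window size,
    and threshold are illustrative assumptions, not a published standard.
    """

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent readings only
        self.z_threshold = z_threshold

    def observe(self, actions_per_minute: float) -> bool:
        """Record a reading; return True if it deviates sharply from the baseline."""
        if len(self.history) >= 5:  # require a minimal baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid divide-by-zero
            anomalous = abs(actions_per_minute - mean) / stdev > self.z_threshold
        else:
            anomalous = False
        self.history.append(actions_per_minute)
        return anomalous

monitor = AgentActionMonitor()
readings = [10, 11, 9, 10, 12, 10, 11, 80]  # final reading is a burst
flags = [monitor.observe(r) for r in readings]
print(flags)  # → only the final burst is flagged
```

In a real deployment, the readings would come from the orchestration layer’s action log, and a flag would trigger the kind of incident-response playbook the council envisions rather than a simple print statement.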

Sources