
When CEOs Say They Use AI, What Are They Really Telling Investors?
By Jordan Vale
Corporate filings are filling with AI talk, but the metrics that would let investors, regulators, and workers judge those claims are thin. A new landscape study of 50 major firms finds reporting is patchy, and Washington’s executive push for an AI action plan is amplifying the demand for hard metrics and auditability.
Why this matters now: Business models are being rewired by machine learning, and capital markets price risk. The Partnership on AI reviewed formal filings from 50 global companies (25 technology firms and 25 non-tech incumbents across financial services, healthcare, automotive, retail, and entertainment) and concluded disclosures vary wildly and rarely include comparable metrics. As the authors put it, "formal reports aren’t marketing materials, but resources to help decision-makers assess a company’s financial health, societal impacts, and strategy over the short, medium, and long term" (Partnership on AI, Nov. 13, 2025).
A patchwork of prose, few comparable numbers
The Partnership on AI review shows a clear pattern: companies are willing to say AI is strategic, but not to quantify what that means. The sample, 50 firms split evenly between AI developers and deployers, included high-level risk statements in 10-Ks and governance notes in sustainability reports, but comparable metrics such as incident rates, model inventories, or performance broken out by subgroup were sparse.
That gap matters. Investors and creditors rely on consistency. Standards bodies such as the International Sustainability Standards Board (ISSB) and the Global Reporting Initiative have spent years building common taxonomies for emissions and human-capital metrics; similar rigor is missing for AI. The Partnership on AI notes that sustainability frameworks such as the ISSB standards and the ESRS provide a template, but AI-specific disclosures often lack trend data, likelihood estimates, and clear links to business relevance.
Put simply, most disclosures are qualitative. The study found that topics such as bias, privacy, liability, and security appear frequently but are rarely quantified. Without numbers, such as the percentage of models with documented risk assessments, the frequency of privacy incidents tied to algorithmic systems, or compute-related emissions measured in kgCO2e, stakeholders cannot compare risk across firms or across reporting periods.
The policy pressure cooker: federal plans and civic pushback
At the same time, the White House has moved from exhortation to structure. Executive Order 14179, signed in January 2025, required an Artificial Intelligence Action Plan; the administration published that plan, "Winning the Race: America’s AI Action Plan," on July 31, 2025. The plan directs agencies to build governance tools and reporting expectations, putting corporate disclosures squarely in the frame of national industrial and safety policy (CSET tracker, updated Nov. 6, 2025). Those parallel pressures, investor desire for decision-useful data and government demand for governance, create an urgent test for corporate reporting systems.
The federal playbook is already being executed. Georgetown’s Center for Security and Emerging Technology has tracked the order’s provisions and timelines; its tracker, last updated Nov. 6, 2025, shows agencies assigned responsibilities and phased deadlines through 2026 and beyond, covering safe deployment, testbeds, and risk governance. That makes corporate disclosure not just a market nicety but also an axis of compliance and procurement risk.
Local civic actors are pushing on a different axis. On Oct. 15, 2025, Alli Finn of the AI Now Institute testified before the Philadelphia City Council with a blunt policy line: "Invest in People, Not in Corporate Power." City-level scrutiny, from procurement rules and impact assessments for police and social services to demands for community-centered audits, is raising reputational and operational stakes for companies that supply public-sector AI.
What good AI reporting looks like - and what boards should demand
Firms now face a converging mandate: investors demand decision-useful reporting, federal agencies want governance aligned with national priorities, and municipalities are using contracts and hearings to force transparency. The result will be a higher bar for what counts as adequate disclosure.
Sources
- Companies are Using AI More Than Ever. Can Their Formal Reporting Keep Pace? - Partnership on AI, 2025-11-13
- The Executive Order on Removing Barriers to American Leadership in Artificial Intelligence - Center for Security and Emerging Technology (CSET), 2025-11-06
- AI Now's Alli Finn Testifies at the Philadelphia City Council Committee on Technology and Information Services - AI Now Institute, 2025-10-14
- International Sustainability Standards Board (ISSB) - IFRS Foundation / ISSB, 2023-06-26