SUNDAY, MARCH 29, 2026
Analysis · 3 min read

What we’re watching next

By Jordan Vale


Photo by Scott Graham on Unsplash

The AI rulebook just got louder.

A chorus of regulatory signals is rising in the United States, as federal notices, standards work, and civil-liberties watchdogs converge on how artificial intelligence should be governed. The Federal Register has begun outlining proposed obligations tied to AI systems, while NIST rolls forward with updates to its AI Risk Management Framework. At the same time, civil-liberties groups, led by the Electronic Frontier Foundation, flag privacy and rights concerns, arguing that safeguards must be built in from the start. Taken together, the three streams sketch a near-term arc: more formal governance, clearer accountability for developers and users, and a still-developing map of penalties and enforcement.

The regulatory push, policy documents show, centers on transparency, risk assessment, and governance. The Federal Register notices do not yet spell out every enforcement trigger or fine level, but they repeatedly frame AI deployment as a risk management problem that requires systematic disclosure of capabilities, data sources, and potential harms. The wording aligns with broader goals of creating auditable systems and giving users some recourse if AI behaves badly. The notices specify that accountability extends beyond the model to the processes that train, deploy, and monitor it, signaling a shift from “how good is this model?” to “how robust is your governance around this model?”

NIST’s involvement anchors the effort in a technical, implementable framework. Policy documents indicate a push toward integrating AI-specific controls into the AI RMF, with emphasis on governance, risk assessment, and incident response. While the exact control families and assessment procedures may still evolve, the direction is clear: AI risk management should be built into procurement, development, and operations in the same way cyber risk is treated today. Compliance guidance states that organizations should begin mapping AI systems to the framework now, even as more precise criteria are being drafted.
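To make the mapping advice concrete, here is a purely illustrative sketch of what an internal AI-system inventory keyed to the AI RMF's four core functions (Govern, Map, Measure, Manage) might look like. The record structure, field names, and example system are assumptions for illustration, not anything prescribed by NIST or the Federal Register notices.

```python
from dataclasses import dataclass, field

# The four core functions of the NIST AI RMF 1.0.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one deployed AI system."""
    name: str
    owner: str
    data_sources: list = field(default_factory=list)
    # RMF function -> pointer to the document/process covering it.
    rmf_coverage: dict = field(default_factory=dict)

    def gaps(self):
        """Return RMF functions with no documented coverage yet."""
        return [f for f in RMF_FUNCTIONS if not self.rmf_coverage.get(f)]

# Example: a chatbot with a governance policy and a use-case register,
# but no measurement or incident-response process recorded yet.
chatbot = AISystemRecord(
    name="support-chatbot",
    owner="customer-ops",
    data_sources=["ticket history", "product docs"],
    rmf_coverage={"Govern": "policy v1.2", "Map": "use-case register"},
)
print(chatbot.gaps())  # -> ['Measure', 'Manage']
```

Even a lightweight inventory like this gives a compliance team something auditable today, and the gap list becomes a to-do list as the framework's assessment criteria firm up.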

For regular people, the upside is better safety nets, explainability in consumer-facing AI, and clearer assurances that everyday products won’t surreptitiously harvest data or impose biased outcomes. The downside is a growing compliance burden on startups and incumbents alike, and potential chilling effects if enforcement interpretations are sharp or punitive before details are settled. The EFF updates emphasize this tension: while more governance can curb harms, it must protect privacy, minimize surveillance creep, and safeguard free expression.

Enforcement remains one of the main uncertainties. The structure of penalties and per-violation fines is still taking shape, and agencies are balancing carrot-and-stick approaches with procurement leverage. What is clear is that the regulatory tempo is not slowing. The convergence of Federal Register activity, NIST’s RMF-aligned guidance, and civil-liberties scrutiny means compliance teams should start mapping systems now, prepare for audits, and design governance that can scale with evolving rules.

What we’re watching next

  • Timelines and deadlines: watch for Federal Register notices that set concrete compliance dates and enforcement triggers.
  • NIST RMF refinements: monitor updates to AI-specific controls and how they map to procurement and product development cycles.
  • Privacy and data rights safeguards: track EFF and other watchdogs’ pushes for privacy-by-design and robust redress pathways.
  • Coverage and exemptions: clarify who falls under heightened accountability requirements (federal procurement vs. the broader market) and where small developers fit.
  • Enforcement signals: look for early civil actions or guidance that signal how penalties will be structured and levied.
Sources

  • Federal Register - AI
  • EFF Updates
  • NIST News
