THURSDAY, MARCH 12, 2026
Analysis · 3 min read


By Jordan Vale

Photo by Scott Graham on Unsplash

Florida launches AI-harm watchdogs: train counselors, let the public report harms.

Florida officials have formed a first-of-its-kind alliance with the Future of Life Institute to confront what they call psychological and social harms from AI chatbots and companion apps. The directive, issued by Governor Ron DeSantis, calls for two tangible programs: a Crisis Counselor Training Curriculum and an AI Harms Reporting Form. Together, they aim to train mental health professionals to spot AI-related harms and give families a direct channel to flag incidents to state authorities. The partnership marks a notable shift: the release describes Florida as the first state, and in some phrasing the first government anywhere, to formalize this kind of collaboration between a governor's office and an AI-safety organization.

The Crisis Counselor Training Curriculum is pitched as a clinical toolkit for licensed professionals. The idea is to provide frameworks and practical tools to recognize when interactions with AI chatbots or companion apps might be contributing to distress, manipulation, or other social harms in children and families. The AI Harms Reporting Form, by contrast, would institutionalize a consumer-facing avenue to document and escalate concerns about AI products. In the release, officials emphasize that the aim is not to police every digital interaction but to flag patterns of harm early, enabling state agencies to respond, study trends, and perhaps inform future policy or product-design discussions.

Policy observers caution that Florida's move sits at the intersection of mental health care and tech safety, a space many jurisdictions are only beginning to map. The emphasis on a state-level partnership with a research institution signals a growing appetite to translate AI risk into public-service infrastructure, rather than relying solely on industry-specific regulation or broad exhortations to "do no harm." Yet specifics remain under-defined in official material, including data privacy safeguards for reporting, the criteria for "AI harm," and how the curriculum will be rolled out across diverse counties. The approach could set a template for how other states frame psychosocial AI risk, even as it raises questions about resource needs and privacy protections in a reporting pipeline.

For Florida families, the potential upside is clear: a formal mechanism to seek help if AI-based interactions feel harmful, and a trained cadre of clinicians who can assess and intervene when digital tools run afoul of emotional or social well-being. For clinicians, the curriculum promises structured guidance for identifying non-obvious harms linked to AI encounters, potentially elevating the role of mental health professionals in digital-safety contexts. For schools and communities that deploy AI-enabled tools for learning or social projects, the reporting form could become a signal that AI-related harms are an area deserving systematic attention and timely response.

Expert analysts note several practical constraints to watch:

  • Implementation and standardization: Without a clear rollout plan, adoption across Florida’s agencies could be uneven, diluting the program’s impact.
  • Privacy and data governance: The AI Harms Reporting Form will need robust safeguards to protect students, families, and teachers, and to define data-minimization and retention practices.
  • Resource alignment: Training and response capacity must match anticipated reporting volume to avoid backlogs or delayed interventions.
  • Diagnostic clarity: Distinguishing AI-induced distress from other mental-health factors requires careful clinical framing to avoid over-pathologizing everyday digital experiences.

What we’re watching next

  • How Florida defines “AI harm” and whether definitions are widened or narrowed based on field feedback.
  • The speed and scale of curriculum delivery across state agencies and any required budget plans.
  • Data privacy safeguards, including who has access to reported harms and how reports influence state action.
  • Early reporting trends: types of harms reported, geographic hotspots, and any resulting policy or program adjustments.
  • Interest from other states in replicating a similar model or forming partnerships with AI-safety researchers.

Sources

  • Governor DeSantis Directs Florida State Agencies to Partner with Future of Life Institute to Shield Families from AI Harm
