Florida partners with the Future of Life Institute to confront AI chatbot harms
By Jordan Vale
Photo by Scott Graham on Unsplash
Florida launches AI-harm watchdog programs: train counselors, let the public report harms.
Florida officials have announced a first-of-its-kind partnership with the Future of Life Institute to confront what they call psychological and social harms from AI chatbots and companion apps. The directive, issued by Governor Ron DeSantis, calls for two tangible programs: a Crisis Counselor Training Curriculum and an AI Harms Reporting Form. Together, they aim to train mental health professionals to spot AI-related harms and give families a direct channel to flag incidents to state authorities. The partnership marks a notable shift: the release describes Florida as the first state, and by some of its phrasing the first government anywhere, to formalize this kind of collaboration between a governor’s office and an AI-safety organization.
The Crisis Counselor Training Curriculum is pitched as a clinical toolkit for licensed professionals. The idea is to provide frameworks and practical tools to recognize when interactions with AI chatbots or companion apps might be contributing to distress, manipulation, or other social harms in children and families. The AI Harms Reporting Form, by contrast, would institutionalize a consumer-facing avenue to document and escalate concerns about AI products. In the release, officials emphasize that the aim is not to police every digital interaction, but to flag patterns of harm early—enabling state agencies to respond, study trends, and perhaps inform future policy or product-design discussions.
Policy observers caution that Florida’s move sits at the intersection of mental health care and tech safety, a space many jurisdictions are only beginning to map. The emphasis on a state-level partnership with a research institute signals a growing appetite to translate AI risk into public-service infrastructure, rather than relying solely on industry-specific regulation or broad exhortations to “do no harm.” Yet specifics, such as data-privacy safeguards for reporting, the criteria for “AI harm,” and how the curriculum will be rolled out across diverse counties, remain unspecified in the official materials. The approach could set a template for how other states frame psychosocial AI risk, even as it raises questions about resource needs and privacy protections in the reporting pipeline.
For Florida families, the potential upside is clear: a formal mechanism to seek help if AI-based interactions feel harmful, and a trained cadre of clinicians who can assess and intervene when digital tools undermine emotional or social well-being. For clinicians, the curriculum promises structured guidance for identifying non-obvious harms linked to AI encounters, potentially elevating the role of mental health professionals in digital safety. For schools and communities that deploy AI-enabled tools for learning or social projects, the reporting form could become a signal that AI-related harms deserve systematic attention and timely response.
Expert analysts note several practical constraints to watch: how reported data will be protected, how “AI harm” will be defined in practice, how the curriculum will reach clinicians across Florida’s diverse counties, and what resources the programs will require.