What we’re watching next
By Jordan Vale
Photo by Unseen Studio on Unsplash
Florida just made AI harms a state issue.
Governor Ron DeSantis has directed Florida state agencies to partner with the Future of Life Institute to design two initiatives aimed at shielding families from the psychological and social harms linked to AI chatbots and companion apps. The collaboration marks the first formal state-level partnership of its kind between a governor’s office and a leading AI-safety organization. The plan centers on a Crisis Counselor Training Curriculum to equip licensed mental health professionals with tools to recognize and respond to AI-related harms, and an AI Harms Reporting Form that allows parents, guardians, or teachers to file formal reports with state authorities.
DeSantis framed the move as a protective measure for children and families on the digital frontier. In the release, he argued that AI companion apps are “targeting our kids—building emotional dependency, exploiting vulnerabilities, and destroying families,” and cast the partnership as a way to give Florida families a voice and equip counselors with the training they need. The Crisis Counselor Training Curriculum is designed to provide clinical frameworks for identifying when AI interactions become harmful, while the AI Harms Reporting Form aims to create a clear channel for public concerns to reach state agencies.
Experts see the effort as a notable shift toward integrating AI safety concerns directly into public-health and child-welfare infrastructure, even as it stops short of creating new regulations. While the projects do not impose mandates or penalties, they could serve as a test bed for how state services coordinate mental health, education, and digital-safety oversight in real time. The move also raises questions about privacy, data handling, and how publicly funded services will respond to rapidly evolving AI tools that operate in homes, schools, and social networks.
Industry observers note a few critical levers and risks. The Florida model will depend on adequate staffing and funding to reach clinicians across all 67 counties, continuous updating of training materials as AI products evolve, and careful design of the reporting form so it protects user privacy without overwhelming agencies. If the curriculum proves effective, other states could replicate the approach, potentially pushing federal policymakers to consider similar public-health pathways for AI safety. Conversely, slow uptake among clinicians or public concern over data sharing could blunt the program’s impact.