AI Regulation: Focus on Use, Not Models
By Sophia Chen
Photo by ThisisEngineering on Unsplash
The push to regulate artificial intelligence is stumbling over a simple truth: you can’t just control the code.
Regulatory efforts around the world are converging on the idea that managing AI's impact means governing how the technology is used rather than the underlying models that power it. China's regulatory framework, introduced in 2021, emphasizes provider accountability and content governance, mandating strict oversight through platform controls and recordkeeping. That approach set a precedent that other regions are now scrutinizing, particularly Europe and the United States, where the regulatory landscape is still in flux.
The European Union's AI Act of 2024 aims to establish a comprehensive framework, yet proposals to update it are already emerging, a sign that even the EU recognizes the need for flexibility as the technology evolves. Meanwhile, India's AI governance framework, released in late 2025, reflects a growing acknowledgment that regulation must be robust and context-specific. Whether any of these frameworks will effectively mitigate real-world risks remains to be seen.
In the United States, the regulatory environment is particularly convoluted. States are pushing ahead with their own AI legislation while the federal government tries to rein in those efforts in favor of a more uniform approach. The tug-of-war raises a critical question about the efficacy of model-centric regulation. California's Transparency in Frontier Artificial Intelligence Act, for instance, suggests that licensing and weight restrictions could provide oversight. But as multiple experts have pointed out, such measures are ultimately futile: once a model is shared, it can proliferate uncontrollably.
The technical reality is that model weights and code are digital artifacts that replicate at negligible cost. Once a model is released, whether intentionally or through a breach, it cannot be recalled. This creates a dilemma: companies that invest in compliance remain exposed to unregulated copies of their technology circulating beyond their control, leaving compliant firms at a competitive disadvantage against rivals who skirt the rules.
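To make the replication point concrete, here is a minimal Python sketch. The checkpoint filename is hypothetical and the script assumes such a file exists locally; the point is simply that weights are ordinary files, copying them costs almost nothing, and the copy is byte-for-byte indistinguishable from the original.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks so large checkpoints never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical checkpoint name; any released weights file behaves the same way.
original = Path("model_weights.safetensors")
replica = Path("model_weights_copy.safetensors")

# Replication is an ordinary file copy: cheap, instant, and repeatable without limit.
shutil.copyfile(original, replica)

# The replica is byte-identical, so nothing in the artifact itself distinguishes
# an "authorized" copy from an "unauthorized" one after release.
assert sha256(original) == sha256(replica)
```

That indistinguishability is what makes post-release controls on the artifact itself so hard to enforce.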
The core challenge is that the dangers of AI stem not just from the models themselves but from how they are applied in the real world. As AI systems are integrated into critical sectors such as healthcare, transportation, and finance, the consequences of their use grow more pronounced. A humanoid robot designed for caregiving, for instance, must be governed by ethical guidelines and operational protocols for its interactions with vulnerable populations, not merely a checklist of model specifications.
Regulating the use of AI—rather than the models—allows for a more pragmatic approach. By focusing on the deployment of AI technologies, regulators can establish guidelines that address ethical considerations, data privacy, and safety without stifling innovation. This shift also aligns with the industry’s growing emphasis on responsible AI practices, which prioritize transparency and accountability in the development and application of these systems.
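As a purely illustrative sketch of what that could look like in practice, the snippet below frames obligations as deployment-time checks attached to a declared use case rather than to the model artifact. The rule names and use-case categories are hypothetical and not drawn from any statute.

```python
from dataclasses import dataclass

# Hypothetical use-based obligations: requirements attach to the deployment
# context (what the system is used for), not to the model behind it.
OBLIGATIONS = {
    "healthcare_triage": {"human_oversight": True, "incident_logging": True},
    "marketing_copy": {"human_oversight": False, "incident_logging": False},
}

@dataclass
class Deployment:
    use_case: str
    human_oversight: bool
    incident_logging: bool

def meets_obligations(d: Deployment) -> bool:
    """Return True if the deployment satisfies every obligation for its declared
    use case; unknown use cases default to the strictest requirements."""
    required = OBLIGATIONS.get(
        d.use_case, {"human_oversight": True, "incident_logging": True}
    )
    return all((not needed) or getattr(d, field) for field, needed in required.items())

# The same model could sit behind both deployments; only the higher-stakes
# use case triggers the stricter obligations.
print(meets_obligations(Deployment("healthcare_triage", True, True)))    # True
print(meets_obligations(Deployment("healthcare_triage", False, True)))   # False
print(meets_obligations(Deployment("marketing_copy", False, False)))     # True
```

The design point mirrors the argument above: an identical model can sit behind a low-stakes and a high-stakes deployment, and it is the use, not the weights, that determines what oversight applies.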
As the dialogue around AI regulation evolves, stakeholders across sectors must collaborate on guidelines that reflect the complexities of how AI is actually used. Keeping the focus on practical applications and real-world consequences will be essential to ensuring that regulation protects the public while still allowing technological advances to thrive without jeopardizing safety or ethical standards.
Ultimately, the future of AI regulation hinges on a clear understanding of its use cases. By prioritizing the oversight of AI applications over the models themselves, we can better navigate the intricate landscape of artificial intelligence and its profound impact on society.