TUESDAY, FEBRUARY 3, 2026
Humanoids · 3 min read

Regulation Debate: Why AI Models Shouldn't Be the Focus

By Sophia Chen


The conversation around AI regulation is shifting from the models themselves to how they are actually used.

As governments around the world scramble to establish frameworks for artificial intelligence, a clear consensus is emerging: regulatory efforts need to focus on how AI is employed rather than on the underlying models that power those applications. This shift comes amid a flurry of legislative activity across multiple countries, each grappling with the complexities of AI governance.

In the United States, the regulatory landscape is fragmented, with various states pushing their own rules while the federal government seeks to impose a more uniform approach. This patchwork system complicates compliance for developers and raises questions about the effectiveness of regulations that aim to control the very models that power AI applications. The fundamental issue is that regulating models is akin to trying to put a genie back in its bottle: once a model is released, it can be copied and disseminated at virtually no cost.

Countries like China have already set the stage by implementing AI-specific regulations that focus on provider accountability and content governance. The European Union has followed suit with its AI Act, which entered into force in 2024 and is undergoing updates and simplifications to better address the evolving technology landscape. India, as of November 2025, has tasked its technical advisors with creating a comprehensive AI governance framework that emphasizes ethical deployment.

But the question remains: Can any of these regulations effectively minimize real-world harm? Proponents of a model-centric approach argue that licensing and controlling access to AI models can lead to better oversight. In practice, however, these efforts often end up being performative rather than protective. Once model weights and code are made public, or leaked, they become difficult, if not impossible, to control. This opens the door for non-compliant entities to exploit the technology without any checks in place.

The alternative, as suggested by experts in the field, is to regulate the use of AI. This would require organizations to adopt ethical guidelines, ensuring that their applications do not harm individuals or society. Such an approach acknowledges the rapidity of AI innovation and the impracticality of trying to regulate the underlying technology itself. Instead, it places the onus on developers to implement responsible practices.

One challenge arising from this shift is the risk of stifling innovation. Developers may find themselves hamstrung by regulatory burdens that slow the pace of technological advancement without necessarily enhancing public safety. Furthermore, without robust enforcement mechanisms, even well-intentioned regulations could be easily circumvented, leaving the public vulnerable.

As the conversation continues, it’s crucial to keep an eye on the outcomes of these regulatory attempts. The focus should be on real-world applications and their societal impacts rather than the intricacies of the models themselves. This approach not only encourages responsible usage but also fosters an environment where innovation can flourish without compromising public safety.

We may not have all the answers yet, but the move towards regulating AI use instead of models reflects a growing understanding of how to effectively navigate the complexities of this transformative technology. As the industry continues to evolve, we must remain vigilant, ensuring that regulations serve to protect the public while allowing continued progress in AI development.

Sources

  • Don’t Regulate AI Models. Regulate AI Use
