FRIDAY, MAY 1, 2026
Analysis · 3 min read

Defining the AI workforce reshapes policy

By Jordan Vale

Georgetown researchers have given policymakers a clearer compass by separating AI development work from the rest of the AI buzz.

A new methodology from the Center for Security and Emerging Technology, or CSET, aims to stop the term AI from acting as a catchall. It defines AI development jobs as roles that directly contribute to the technical development of AI systems and pairs that with a distinct category of AI-adjacent work. The point is simple but consequential: when governments plan training programs, immigration rules, or funding, they need a precise taxonomy instead of an umbrella label that swallows everything from model building to data labeling.

The push matters because official labor statistics do not neatly capture AI work today. CSET notes that without a clear taxonomy, it is hard to answer basic questions about who is actually building AI and what kind of skills are in demand. The new approach uses job postings data as a proxy for demand, offering a more timely signal than traditional occupation classifications. In practice, that means policy analysts could track shifts in demand for AI development roles as technology evolves, rather than relying on slow, broad categories that miss emerging technical work.
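To make the postings-based approach concrete, here is a minimal sketch of how an analyst might sort job postings into CSET-style buckets using keyword rules. The keyword lists, labels, and sample postings below are illustrative assumptions for demonstration only, not CSET's actual criteria or data.

```python
# Illustrative sketch: classify job postings into "AI development",
# "AI-adjacent", or "other" buckets using hypothetical keyword rules.
# The terms below are assumptions, not CSET's real methodology.

# Hypothetical signals for roles that directly build AI systems.
DEVELOPMENT_TERMS = {"machine learning engineer", "model training", "neural network"}
# Hypothetical signals for AI-adjacent work.
ADJACENT_TERMS = {"data labeling", "ai product manager", "ai policy"}

def classify_posting(title: str, description: str) -> str:
    """Return 'ai_development', 'ai_adjacent', or 'other' for one posting."""
    text = f"{title} {description}".lower()
    if any(term in text for term in DEVELOPMENT_TERMS):
        return "ai_development"
    if any(term in text for term in ADJACENT_TERMS):
        return "ai_adjacent"
    return "other"

# Toy postings standing in for a scraped postings feed.
postings = [
    ("Machine Learning Engineer", "Own model training pipelines"),
    ("Data Labeling Specialist", "Annotate images for vision systems"),
    ("Accountant", "Manage quarterly filings"),
]

# Tally demand by bucket, the kind of timely signal the article describes.
counts: dict[str, int] = {}
for title, desc in postings:
    label = classify_posting(title, desc)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'ai_development': 1, 'ai_adjacent': 1, 'other': 1}
```

In practice such rules would be far noisier than this toy suggests, which is exactly the measurement concern raised below: titles vary by employer and region, and keyword matches say nothing about supply.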

This move comes with a practical promise and a set of tensions. On the one hand, policymakers could more accurately target education and training pipelines to the roles most central to AI development, and design immigration or mobility policies to attract the right specialists. On the other hand, creating rigid categories can lock in a snapshot of a fast moving field. The risk is undercounting hybrid roles that blend software engineering with data science, or missing new specialties that arise as models get more capable and applications more diverse.

Several practitioner tensions stand out. First, definitional choices matter. How you classify a job shapes not only who gets trained but also which programs count as investments in the AI workforce and which funding streams flow to them. Second, measuring with job postings data introduces biases and lag. Postings reflect demand at a moment in time and can be noisy about actual supply, regional concentrations, and the precision of role titles used by employers. Third, the boundary between AI development and AI-adjacent work is porous. Roles in governance, safety, or product management can demand deep technical insight while not fitting neatly into a single bucket. Finally, there is the question of adoption. Will governments and agencies align with this taxonomy, or will they preserve their own preferred classifications for budgeting and reporting?

Looking ahead, observers should watch for two developments. One, how this taxonomy is adopted by official statistics bodies or government agencies and whether it feeds into consolidated occupational data. Two, whether international partners build comparable taxonomies so cross-country workforce planning becomes more coherent. The next phase will likely test the model against real policy decisions, revealing how sharply a precise AI workforce definition can translate into training initiatives, funding allocations, and strategic immigration policies.

In the end, the effort is less about rebranding the hype than about steering the policy engine toward the people actually building AI systems. If the taxonomy holds, it could sharpen not just workforce planning but also the conversations around which skills governments must cultivate to stay competitive in a rapidly evolving AI era.

Sources

  • Defining the AI Workforce
