MONDAY, MAY 4, 2026
Analysis · 3 min read

Redefining Who Builds AI

By Jordan Vale

A new taxonomy will redefine who actually builds AI.

A Georgetown policy note outlines a precise way to count the AI workforce, arguing that government strategy, corporate training, and visa policies hinge on whether we classify roles as AI development work or AI-adjacent activity. The core idea is that "AI development jobs" are those that directly contribute to the technical development of AI systems, while a broader set of roles tangential to AI, such as data labeling, policy, sales, and other support functions, sits outside that core definition. The distinction matters because it changes what counts as a talent shortage, what kinds of training programs governments should fund, and how job markets are measured.

Policy debates over AI workforce readiness have long suffered from fuzzy terminology. The Georgetown Center for Security and Emerging Technology argues that official labor statistics do not neatly capture who is building AI as the field evolves. Without a clear taxonomy, planners risk either inflating the pool of AI workers or undercounting the people who actually push models from concept to deployment. By focusing on roles that directly develop AI systems, the new framework aims to sharpen policy levers, from funding for training pipelines to immigration policies designed to attract specialized talent.

Industry observers say the taxonomy could have immediate practical effects. For policymakers, it provides a vocabulary for targeting education and training dollars at the roles that actually construct AI systems, rather than broad, AI-adjacent categories. For employers, it helps align recruiting and workforce planning with policy objectives, clarifying which positions qualify for incentives or streamlined visa processes and which do not. For researchers and analysts, the framework promises more apples-to-apples measurement across time and jurisdictions, a longstanding hurdle in AI labor studies.

Yet the approach is not without caveats. Job postings data, the authors note, can be noisy and can lag behind fast-moving technology teams that reorganize around new platforms or workflows. If governments rely too heavily on postings without triangulating against other sources, such as traditional occupational classifications or company-level role definitions, they risk misclassifying roles or missing emerging functions like ML engineering, platform teams, or model verification work. Analysts add that a rigid taxonomy also risks freezing evolving practices into static boxes, so ongoing revision and crosswalks to existing standards will be essential.

From a sector perspective, the shift could affect how AI vendors and public-sector buyers frame their own talent narratives. If the taxonomy becomes a common reference point, it could influence how firms market AI roles to potential hires, how training ecosystems are funded, and how cross-border talent flows are measured against national AI ambitions. In a climate where nations compete for AI leadership, clarity about what counts as core AI development work offers a practical tool for benchmarking capabilities and gaps.

Two practical takeaways stand out for the near term. First, policy documents show that precise definitions matter for resource allocation. Governments will likely push more focused investments in core AI development roles, with the expectation that strengthening these positions accelerates deployment of safe and effective AI systems. Second, the taxonomy highlights a need for triangulated data streams. Relying solely on job postings will not suffice; robust policy design will require integrating the new definition with existing labor statistics and industry surveys to avoid misreads of demand and supply.

In short, the new AI workforce taxonomy from Georgetown’s policy researchers does not prescribe regulation, but it does recalibrate how we count and fund the talent needed to build AI. For compliance officers, policy teams, and talent leaders, the message is clear: definitions drive decisions, and those definitions are now more finely tuned to what actually creates AI technology.

Sources

  • Defining the AI Workforce
