A New Taxonomy for AI Jobs Emerges
By Jordan Vale
A precise taxonomy for AI development jobs has just landed.
Georgetown's Center for Security and Emerging Technology has unveiled a targeted framework to distinguish the people who actually build AI systems from the broader set of AI-related roles. The aim is to cut through the fog about what counts as AI work, a distinction policymakers say is essential for shaping training programs, funding, and immigration policies that keep pace with fast-moving technology. The core idea is simple, but its implications are wide: define AI development jobs as roles that directly contribute to the technical development of AI systems, and measure demand using job postings data rather than relying on the vagaries of ad hoc labeling.
The new approach, described in a Georgetown blog, highlights a stubborn problem in many policy debates: official labor statistics do not neatly capture AI work as a distinct category. Without a clear taxonomy, it is easy to misjudge demand, misallocate training dollars, or misinterpret who is being hired as AI expands across industries. By drawing a sharper line between development work and AI-adjacent activities, the framework aims to help governments, educators, and firms align scarce talent with actual needs. In practice, this means policymakers can better forecast shortages, and training providers can tailor curricula to the precise competencies developers need to build and improve AI systems.
Industry observers say the framing matters more than ever as AI becomes a standing item on national agendas. The taxonomy rests on two pillars: a precise definition of AI development jobs and a measurement method built from job postings data. The latter is crucial because it offers a near real-time pulse on demand, potentially bypassing lags in traditional statistics that can obscure rapid shifts in skill requirements, toolchains, and model types. Yet observers caution that job postings are not a perfect proxy. Posting language varies by firm, geography, and the stage of a project, and there is a risk of undercounting roles that do not explicitly label themselves as AI development despite their direct contribution to AI work.
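To make the measurement idea concrete, here is a minimal sketch of classifying job postings as development versus adjacent roles by keyword matching. The keyword lists and the `classify_posting` function are illustrative assumptions for this article, not CSET's actual methodology, which the source does not specify.

```python
# Illustrative sketch: bucket job postings into "development", "adjacent",
# or "unrelated" via simple keyword matching. The term lists below are
# hypothetical, not CSET's real classifier.

DEVELOPMENT_TERMS = {"train models", "pytorch", "model architecture", "fine-tuning"}
ADJACENT_TERMS = {"ai policy", "ai product manager", "ai sales"}

def classify_posting(text: str) -> str:
    """Label one posting based on which hypothetical term list it matches."""
    t = text.lower()
    if any(term in t for term in DEVELOPMENT_TERMS):
        return "development"
    if any(term in t for term in ADJACENT_TERMS):
        return "adjacent"
    return "unrelated"

# Tally labels across a small sample of postings.
postings = [
    "Seeking ML engineer to train models in PyTorch",
    "AI product manager to coordinate roadmap",
    "Accountant needed for a growing firm",
]
counts: dict[str, int] = {}
for p in postings:
    label = classify_posting(p)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'development': 1, 'adjacent': 1, 'unrelated': 1}
```

A real pipeline would need far more care, which is exactly the observers' caveat above: posting language varies by firm and geography, so naive keyword lists will undercount roles that contribute to AI development without saying so.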
For practitioners, the shift offers concrete incentives and warnings, and four insights crystallize from it. Policymakers now have a clearer target for workforce planning and for immigration policy designed to attract AI development talent. Employers can use the taxonomy to sharpen hiring criteria, looking for the precise technical competencies that move a project from prototype to production. Educators and training providers gain a frame for building or updating curricula around defined AI development skill sets, rather than chasing a moving target of what might be labeled AI. And workers can map career paths more clearly, distinguishing roles that lead to hands-on AI development from broader, AI-related positions that may require different training tracks.
What to watch next is as important as what has already been defined. If policymakers elevate this taxonomy as a standard, we may see broader adoption in national statistics and cross-country comparisons, enabling more coherent talent strategies. But the dynamic nature of AI means the taxonomy will need ongoing refinement, with attention to how emerging disciplines such as AI safety, model auditing, and responsible deployment fit into the development versus adjacent spectrum.