TUESDAY, MARCH 17, 2026
AI & Machine Learning · 3 min read

AI Targeting: Humans Vet the Robot Picks

By Alexander Cole

Photo by Andrea De Santis on Unsplash

AI chatbots could rank strike targets, but humans would still approve.

A Defense Department official said the military is exploring a workflow where a generative AI system could ingest lists of potential targets in classified settings, analyze them, and prioritize which targets to consider first. The ultimate decision would still rest with human operators who review and validate the AI’s recommendations, accounting for factors like aircraft locations and other battlefield realities. In public remarks that described a possible path forward without confirming current practice, the official pointed to a scenario in which state‑of‑the‑art chatbots would assist—rather than replace—the human decision chain.

The disclosure arrives as the Pentagon faces scrutiny over a strike linked to allegations about an Iranian school, adding urgency to questions about how automated tools might influence high-stakes decisions. The official noted that several major AI vendors, including OpenAI and xAI, have contracts with or are in discussions with the United States government about deploying models in classified environments. In theory, models such as OpenAI's ChatGPT and xAI's Grok could power this kind of workflow, the official suggested, while reiterating that the specifics of current deployments remain classified and are not being publicly confirmed.

Anthropic’s Claude has also been cited by outlets as having been integrated into military AI systems and used in operations in Iran and Venezuela, though the official stopped short of detailing whether this exact target-ranking workflow is in operation. The comments nonetheless illuminate a trajectory in which generative AI moves from experimental chatbots to a tool embedded in the decision loop for sensitive, strategic choices.

Analysts and defense researchers say the pivot is plausible—but fraught. The value proposition is clear: speed and broad situational synthesis across disparate data streams. The risk, however, is equally clear. Relying on a model to rank targets raises the specter of misidentification, bias, or algorithmic blind spots when civilian harm or legal constraints are at stake. In practice, the human‑in‑the‑loop model is meant to curb those risks, but it introduces its own exposure: decision latency, overreliance on automated outputs, and the potential for automation bias where operators defer to the machine’s ranking even when it misses nuance on the ground.

Think of it like a navigator on a fighter jet: the AI provides the fastest possible route, screening dozens of variables in the blink of an eye, but the pilot—armed with training, rules of engagement, and real‑time weather—must approve the path before committing. The push toward chatbot‑assisted targeting could shave precious minutes from planning cycles, but it also creates a new surface for failure modes: prompts that elicit overconfident conclusions, data that is incomplete or noisy, and the challenge of auditing AI reasoning in a context where missteps have grave consequences.

Two practical takeaways for teams watching this space: first, human-in-the-loop review remains essential. Any deployment will require rigorous red-teaming, adversarial testing, and clear escalation pathways so that AI suggestions are not misinterpreted as orders. Second, governance and data security are non-negotiable. The chain from data input to target ranking must be auditable, with tightly controlled access, robust prompt containment, and attestable decision logs that can satisfy both domestic law and international humanitarian law.
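To make the second takeaway concrete, here is a minimal sketch of what an auditable human-in-the-loop gate could look like in principle. Nothing here reflects any actual military system; the `Candidate` type, the operator callback, and the hash-chained log are all hypothetical illustrations of the pattern: the model only proposes a ranking, a human decides, and every decision is appended to a tamper-evident record.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class Candidate:
    """Hypothetical model suggestion: an identifier, a priority score
    assigned by the model, and the model's stated rationale."""
    target_id: str
    model_score: float
    rationale: str


def review_ranking(candidates, operator_approves):
    """Pass every model suggestion through a human decision.

    `operator_approves` is a callback standing in for the human
    operator; the model's score orders the queue but never decides.
    Each decision is appended to a hash-chained log so that any
    after-the-fact tampering breaks the chain.
    """
    log, prev_hash = [], "0" * 64
    approved = []
    # Model output only orders the review queue, highest score first.
    for cand in sorted(candidates, key=lambda c: c.model_score, reverse=True):
        decision = operator_approves(cand)  # human call, not the model's
        entry = {
            "target_id": cand.target_id,
            "model_score": cand.model_score,
            "decision": "approved" if decision else "rejected",
            "prev_hash": prev_hash,
        }
        # Chain: each entry's hash covers its content plus the previous hash.
        prev_hash = hashlib.sha256(
            (prev_hash + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        entry["entry_hash"] = prev_hash
        log.append(entry)
        if decision:
            approved.append(cand)
    return approved, log
```

The design point is that the log records rejections as well as approvals, so auditors can see what the model suggested and what the human overruled, not just the final list.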

What to watch next in the coming months: how the Pentagon formalizes safeguards for AI‑assisted targeting, what kinds of audits and after‑action reviews accompany any use, and how public and international scrutiny shapes limits on deploying such systems in combat environments. If these workflows ever scale, they’ll redefine not just who makes decisions, but how quickly they must be explainable—and held accountable.

Sources

  • A defense official reveals how AI chatbots could be used for targeting decisions
