AI in the crosshairs: strikes debated
By Alexander Cole

AI could steer a strike; ethics lag behind.
The latest issue of MIT Technology Review’s The Download threads together two very different signals, the Earth’s own rumblings and the potential use of AI in military targeting, into a single, unsettling theme: machine intelligence is no longer content to stay in the lab. It’s creeping into the decision loops that determine life-and-death actions, and the guardrails are barely keeping pace.
On one track, the newsletter riffs on infrasound—the sub-20 Hz vibrations that travel around the globe, echoing events like glaciers calving or storms in the oceans. The story invites readers to hear what has long been invisible to human ears, a reminder that data can be sensed, mapped, and interpreted in ways that reveal the planet’s ongoing drama. In practice, these signals fuel better disaster response, climate research, and resource monitoring by turning the planet’s faint murmurs into actionable intelligence. It’s a vivid analogy for what contemporary AI can do with messy real-world data: convert low-signal, high-impact phenomena into decision-ready insights.
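To make that analogy concrete, here is a small, purely illustrative Python sketch, using synthetic data and an assumed sample rate rather than any real sensor feed: it band-pass filters a pressure recording down to the sub-20 Hz infrasound range and flags windows whose energy jumps above a baseline, a toy version of the low-signal-to-insight conversion the newsletter describes.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Hypothetical illustration: isolate sub-20 Hz (infrasound) energy from a
# noisy pressure recording and flag windows whose energy rises above a
# baseline, i.e. turn a faint planetary "murmur" into an actionable alert.
fs = 100.0                           # assumed sample rate in Hz
t = np.arange(0, 600, 1 / fs)        # ten minutes of synthetic data
signal = 0.01 * np.random.randn(t.size)
# Inject a 5 Hz "event" five minutes in, standing in for a distant rumble.
signal[30_000:31_000] += 0.2 * np.sin(2 * np.pi * 5 * t[30_000:31_000])

# Band-pass 0.1-20 Hz to keep only the infrasound band.
sos = butter(4, [0.1, 20.0], btype="bandpass", fs=fs, output="sos")
infrasound = sosfiltfilt(sos, signal)

# Simple energy-based detector over 10-second windows.
win = int(10 * fs)
energies = np.array([np.mean(infrasound[i:i + win] ** 2)
                     for i in range(0, infrasound.size - win, win)])
threshold = energies.mean() + 3 * energies.std()
alerts = np.flatnonzero(energies > threshold)
print(f"windows flagged as anomalous: {alerts.tolist()}")
```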
The second thread is more provocative: AI for strikes on Iran. The piece doesn’t pretend this is routine or risk-free. It flags a real dilemma facing policy makers and defense contractors alike—that AI systems, when linked to real-time intelligence and kinetic options, compress timescales and raise the stakes of every misinterpretation. The same tools that can forecast a weather pattern or flag anomalies in a fleet could, if misapplied, amplify escalation, misclassify targets, or trigger unintended consequences without enough human judgment.
Two takeaways stand out for practitioners who build or buy defense-relevant AI, or who design the governance that makes these systems safe to deploy. First, the bottleneck isn’t merely model accuracy or speed. It’s the governance layer that decides when an AI’s recommendation merits action, and how to override it when signals are ambiguous or biased. The piece’s framing suggests that the core risk isn’t that AI will be wrong in the abstract, but that it will be trusted too quickly in a high-stakes context without sufficient checks, balances, and explainability. In practice, that means hardening human-in-the-loop protocols, defining escalation ladders, and building verifiable audit trails that survive attempts to tamper with data or the chain of command.
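To make those protocols a little more tangible, here is a minimal, hypothetical sketch of a human-in-the-loop gate; the class names, confidence thresholds, and escalation bands are assumptions for illustration rather than a description of any deployed system, and the point is simply that the code path never yields an “act now” state on its own.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    REJECT = "reject"            # discard automatically
    ESCALATE = "escalate"        # route up the chain for senior review
    AWAIT_HUMAN = "await_human"  # hold until an operator signs off


@dataclass
class Recommendation:
    target_id: str
    confidence: float       # model's self-reported confidence, 0..1
    sources_agreeing: int    # how many independent feeds corroborate it
    explanation: str         # human-readable rationale for the audit trail


def gate(rec: Recommendation,
         min_confidence: float = 0.9,
         min_sources: int = 2) -> Decision:
    """Illustrative escalation ladder: no recommendation acts on its own.

    Low confidence or weak corroboration is rejected outright; ambiguous
    cases are escalated; even strong cases only ever *await* a human.
    """
    if rec.confidence < min_confidence or rec.sources_agreeing < min_sources:
        return Decision.REJECT
    if rec.confidence < 0.97:        # ambiguous band: push up the ladder
        return Decision.ESCALATE
    return Decision.AWAIT_HUMAN      # never returns an "act now" state


# Usage: even a high-confidence, well-corroborated recommendation stops here.
rec = Recommendation("track-042", confidence=0.98, sources_agreeing=3,
                     explanation="pattern match on three independent feeds")
print(gate(rec))   # Decision.AWAIT_HUMAN
```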
Second, the data-and-compute plumbing matters as much as the model itself. Real-world decision support for defense relies on multi-source feeds (satellite intelligence, signals intelligence, human intelligence, open-source data), and those streams must be shielded against adversarial manipulation. That implies strong data provenance, certifiable data pipelines, tamper-evident logging, and robust fail-safes if any component of the chain shows anomalies. For product teams, the lesson is clear: if you’re wiring AI into operational decision loops, you’re also committing to a whole stack of governance, security, and ethics controls that extend far beyond model code.
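As one illustration of what tamper-evident logging could look like, the hedged sketch below hash-chains provenance records from several made-up feeds so that altering any earlier record breaks verification; it is a toy, not a description of any fielded pipeline.

```python
import hashlib
import json
import time


def append_entry(log: list[dict], source: str, payload: dict) -> dict:
    """Append a provenance record whose hash chains to the previous entry,
    so later tampering with any earlier record breaks verification."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "ts": time.time(),
        "source": source,        # e.g. "satellite", "sigint", "osint"
        "payload": payload,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body


def verify(log: list[dict]) -> bool:
    """Recompute every hash; a single altered field anywhere fails the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True


log: list[dict] = []
append_entry(log, "satellite", {"feed_id": "sat-7", "frame": 1841})
append_entry(log, "osint", {"feed_id": "web-3", "note": "corroborating report"})
print(verify(log))                    # True
log[0]["payload"]["frame"] = 999      # simulate tampering with the first record
print(verify(log))                    # False
```

In a real deployment the chain would also be anchored to external, independently witnessed checkpoints, but even this toy shows why provenance has to be designed in rather than bolted on.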
A few practical implications to watch next: whether procurement and certification requirements start to codify human-in-the-loop controls and escalation ladders, whether tamper-evident audit trails and data provenance become baseline expectations for decision-support systems, and how vendors demonstrate fail-safe behavior when any feed in the chain looks compromised.
The essence here is not sensationalism but a sobering recalibration: AI’s reach has moved from the lab to the decision desk, and the ethical and operational scaffolding has to move with it. The Earth’s quiet signals won’t shake anyone out of complacency, but the noisy, consequential uses of AI in geopolitics just might.