AI’s Growth Curve: No Wall Ahead
By Alexander Cole

AI training data has exploded a trillionfold, and the wall still hasn’t shown up.
Mustafa Suleyman’s argument in Technology Review is a blunt counterpoint to the wall-chasing skeptics: frontier AI models are riding an exponential surge in both data and compute, a ramp that keeps advancing even as pundits cry “this will end.” He frames the trend as a sharp break from the old, linear intuition that served us on the savannah: walk for an hour and you cover a given distance; double the time and you double the distance. In AI, training data and FLOPs have grown far faster than that intuition can handle, and the consequences ripple through every aspect of product building, from cost and compute planning to governance.
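The contrast can be made concrete with a few lines of arithmetic (an illustrative sketch, not from the article): doubling the input doubles a linear quantity, but it squares an exponential one.

```python
# Linear intuition: distance walked scales with time.
hours = 10
speed_kmh = 5                      # assumed walking speed
distance = hours * speed_kmh       # 50 km
distance_doubled = (2 * hours) * speed_kmh  # 100 km: exactly double

# Exponential growth: a quantity that doubles each step.
steps = 10
exponential = 2 ** steps                 # 1024
exponential_doubled = 2 ** (2 * steps)   # 1_048_576: the square, not double

print(distance_doubled / distance)       # 2.0
print(exponential_doubled / exponential) # 1024.0
```

Doubling the horizon multiplies the linear quantity by 2 but the exponential one by 1024, which is why savannah-trained intuition keeps underestimating the curve.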
The piece’s core claim is stark: the amount of compute and data behind leading models has grown by orders of magnitude—what Suleyman describes as a roughly trillion-fold acceleration, moving from the scale of early systems around 10^14 FLOPs to well beyond 10^26 FLOPs in the largest models. The upshot isn’t just bigger models; it’s a shift in the entire training paradigm. Skeptics have pointed to energy limits, data bottlenecks, and the slowing of Moore’s Law, but the author argues the exponential forces driving frontier AI are broader and more resilient than those critiques suggest. The result, in practical terms, is a continued escalation of what “frontier” means in real-world use.
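The trillion-fold figure follows directly from the article’s round numbers; a quick back-of-envelope check (the FLOPs values are the piece’s orders of magnitude, not measurements):

```python
# Order-of-magnitude figures cited in the article.
early_flops = 1e14      # rough scale of early training runs
frontier_flops = 1e26   # rough scale of today's largest models

growth = frontier_flops / early_flops
print(f"Growth factor: {growth:.0e}")  # Growth factor: 1e+12
```

10^26 / 10^14 = 10^12, i.e. a trillion-fold, which is where Suleyman’s framing comes from.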
The piece leans on a vivid turn of phrase: where the old era trained what amounted to more calculators in a room, the new era runs a factory floor of data, software, and specialized hardware operating at industrial tempo. The bottlenecks now aren’t simply “can we fit more GPUs in a rack?” They are how to efficiently harness, align, and govern a system whose scale outpaces traditional software engineering playbooks. In other words, scaling has acquired new levers: data pipelines, distributed training, and tooling that keep teams of researchers from waiting on a single bottleneck.
For product teams, the implications are both exciting and nerve-wracking. If the current trajectory holds, the next generation of copilots and agents could offer capabilities that were previously the domain of enterprise research labs: deeper reasoning, broader knowledge, and more reliable tool use. Yet the cost of training and the energy footprint rise in tandem with capability, creating a practical ceiling for many startups and smaller teams. Expect inference budgets to matter as much as model size; latency, reliability, and safety engineering become competitive differentiators, not afterthoughts.
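The inference-budget point can be sketched with a toy cost model; every number below is a hypothetical assumption for illustration, not a quoted price.

```python
# Toy inference-budget model (all figures hypothetical) showing why
# per-token cost can dominate planning as usage scales.
requests_per_day = 100_000
tokens_per_request = 1_500           # assumed average: prompt + completion
price_per_million_tokens = 10.0      # hypothetical dollars

daily_tokens = requests_per_day * tokens_per_request
daily_cost = daily_tokens / 1e6 * price_per_million_tokens
print(f"${daily_cost:,.0f} per day")  # $1,500 per day at these assumptions
```

Even at modest traffic, serving cost compounds daily, which is why latency and cost engineering sit alongside raw capability as differentiators.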
Two concrete practitioner takeaways emerge from Suleyman’s frame:
First, the analogy helps: the leap from “more calculators” to a “factory floor” mirrors the shift from ad hoc optimizations to industrial-scale pipeline engineering. It’s not just bigger; it’s a new operating system for AI development, where parallelism, orchestration, and governance workflows become as important as the model architecture itself.
Second, caveats matter. The exponential tide can create false security: progress can outpace evaluation, alignment, and safety checks. Without rigorous, real-world testing and risk controls, larger models can amplify hallucinations or misgeneralizations despite impressive benchmarks. And while Suleyman’s argument emphasizes the availability of compute, it doesn’t erase the practical limits: access to data, energy prices, and the cost of high-quality curation remain material constraints for teams racing to ship.
If the thesis holds, this quarter’s product bets should tilt toward scalable, cost-aware deployments that balance capability with governance and reliability. Teams should anticipate continued leaps in what AI can do, but guardrails—cost modeling, data governance, and robust evaluation—will determine who can ship responsibly at scale.