WEDNESDAY, MARCH 11, 2026
Humanoids · 3 min read

Attention Under Siege: Haidt Warns AI May Accelerate Decline

By Sophia Chen


Your attention is dying—and AI will finish the job.

MIT News reports that social psychologist Jonathan Haidt used the Compton Lectures to paint a stark picture of how personal technology, especially smartphones and social media, is eroding cognition, civic life, and child well-being. Haidt, the Thomas Cooley Professor of Ethical Leadership at NYU, argues the damage is already measurable and spreading: “around the world, people are getting diminished.” He sums up the trend in three blunt terms: people are becoming less intelligent, less happy, and less competent—and he says the deterioration is happening very fast. The concern, he notes, lies not only in today’s devices but in the trajectory, as AI technologies begin to permeate everyday life and decision making. “If we continue with current trends as AI is coming in, it’s going to accelerate,” he warns, framing AI not as a distant luxury but as a force multiplier for the very problems he outlines.

Haidt places particular emphasis on the social fabric—how communities, deliberation, and democratic participation are shaped by attention and distraction. He has long focused on the consequences for younger generations, including the rise in anxiety and depression among young women that he attributes to social media, a theme he discusses in his recent book, The Anxious Generation. But his argument extends beyond mood metrics to a broader cognitive decline: a widening gap in the capacity to concentrate, reason, and filter information in an era of perpetual feeds and rapid feedback loops. The destruction of the human capacity to pay attention, he argues, is the core bottleneck that makes everything else harder—learning, judgment, civic engagement, and even interpersonal trust.

For the field of humanoid robotics and intelligent agents, the talk lands with practical implications. As robots increasingly share workspaces with people and supplement education, care, and entertainment, designing around users’ limited attention becomes a strategic constraint rather than a nice-to-have. Demonstration footage from numerous labs in recent years has shown impressive locomotion and dexterity, but Haidt’s framing asks a harder question: are our devices, and by extension our robots, training people to think and act in ways that degrade self-regulation? The takeaway for engineers and investors is to pair performance milestones with attention-aware safeguards.

Two concrete practitioner insights emerge from translating Haidt’s concerns into robotics practice. First, attention management must be a design constraint, not an afterthought. Interfaces, whether on a social robot, a remote healthcare assistant, or an educational droid, should minimize unnecessary cognitive load, present information in digestible chunks, and encourage reflective pauses rather than compulsive interaction. Second, there is a responsibility to build “attention resilience” into systems. That means features that help users regulate engagement—clear opt-out mechanisms, intentional pacing in conversations, and prompts that encourage longer-form thinking over short, reactive replies. In education or therapeutics, robotic tutors should avoid reinforcing constant novelty-seeking and instead balance novelty with opportunities for sustained, deep-learning sessions.
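To make the two insights concrete, here is a minimal, purely illustrative sketch of what an attention-aware interaction policy for a social robot might look like. Every name here (`AttentionPolicy`, its parameters, and the threshold values) is a hypothetical assumption for illustration, not an API from any real robotics framework: it chunks output into digestible pieces, rate-limits unsolicited prompts to enforce intentional pacing, and periodically switches into a reflective turn instead of a reactive reply.

```python
from dataclasses import dataclass


@dataclass
class AttentionPolicy:
    """Hypothetical attention-aware interaction policy (illustrative sketch).

    Parameter values are placeholders; real systems would tune them
    empirically and expose clear opt-out controls to the user.
    """
    max_chunk_chars: int = 280           # present information in digestible chunks
    min_seconds_between_prompts: float = 30.0  # intentional pacing of prompts
    reflective_pause_every: int = 3      # every Nth exchange, invite reflection
    _exchanges: int = 0
    _last_prompt_at: float = 0.0

    def chunk(self, text: str) -> list[str]:
        """Split long output into digestible chunks at word boundaries."""
        chunks, current = [], ""
        for word in text.split():
            candidate = (current + " " + word).strip()
            if len(candidate) > self.max_chunk_chars and current:
                chunks.append(current)
                current = word
            else:
                current = candidate
        if current:
            chunks.append(current)
        return chunks

    def may_prompt(self, now: float) -> bool:
        """Rate-limit unsolicited prompts to avoid compulsive interaction loops."""
        return (now - self._last_prompt_at) >= self.min_seconds_between_prompts

    def record_prompt(self, now: float) -> None:
        """Remember when the robot last initiated contact."""
        self._last_prompt_at = now

    def next_turn(self) -> str:
        """Return 'reflect' every Nth exchange to favor longer-form thinking."""
        self._exchanges += 1
        if self._exchanges % self.reflective_pause_every == 0:
            return "reflect"
        return "respond"
```

The point of the sketch is the shape, not the numbers: attention management shows up as explicit, inspectable parameters of the interaction loop rather than as an emergent side effect of engagement-maximizing defaults.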

This caution should not obscure the broader industry context. Haidt’s critique is part of a growing discourse about the social costs of consumer technology, and it dovetails with calls for more transparent AI, better media literacy, and design that respects long-term well-being. In robotics terms, the real risk isn’t just devices that perform well in isolation; it’s systems that compound attention-straining behaviors at scale. The counterbalance is clear: advance capabilities while tightening the feedback loop to human cognition, so the net effect is not a race to shorter attention spans but a collaboration that supports thoughtful work, learning, and civic participation.

MIT’s framing—that the decline of attention could accelerate alongside accelerating AI—puts timely pressure on research, funding, and product teams to prove they can deliver benefits without hollowing out the very skills we rely on for complex, long-horizon tasks in robotics and automation.

Sources

  • Personal tech, social media, and the “decline of humanity”
