TUESDAY, APRIL 21, 2026
AI & Machine Learning · 3 min read

AI doubles spark pushback in Chinese tech

By Alexander Cole

[Image: hands hold a rice bowl as digital grains are pulled away into the air. Credit: technologyreview.com]

Bosses want AI doubles; workers push back.

In China, a spoof turned into a serious conversation about the future of work: a GitHub project called Colleague Skill promises to distill a coworker’s workflows, personality quirks, and tasks into an AI agent that can do the job. The idea sounds straight out of a sci‑fi novel, but MIT Technology Review reports that it struck a nerve among tech workers who say bosses are increasingly nudging staff to document their workflows for automation—and to let AI agents mimic them.

The project, created by Tianyi Zhou of the Shanghai Artificial Intelligence Laboratory, asks a user to name the colleague to be replicated and supply basic profile details. It then imports chat histories and files from popular enterprise chat platforms like Lark and DingTalk and generates “manuals” describing duties and even idiosyncrasies. The goal, the author suggests, is to give an AI agent a portrait of the coworker so it can imitate how they work. It’s billed as a stunt, but its reach has gone far beyond a private GitHub repo, fueled by a wider discourse in China about automation, layoffs, and the call for employees to “automate themselves” before someone else does.
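The article doesn't publish the project's code, so the flow it describes can only be illustrated with a hypothetical sketch. Everything below is invented for illustration: the `ColleagueProfile` class, the `build_manual` function, and the crude keyword heuristic standing in for whatever LLM summarization step the real tool would use to turn imported chat history into a "manual."

```python
from dataclasses import dataclass, field


@dataclass
class ColleagueProfile:
    """Basic profile details the tool reportedly asks the user to supply."""
    name: str
    role: str
    traits: list[str] = field(default_factory=list)  # idiosyncrasies, quirks


def build_manual(profile: ColleagueProfile, chat_lines: list[str]) -> str:
    """Distill imported chat history into a plain-text 'manual' of duties.

    A stand-in for the summarization the real tool would do: here we just
    pick out lines that look like task requests.
    """
    header = f"Manual for {profile.name} ({profile.role})"
    traits = ", ".join(profile.traits) or "none recorded"
    duties = [ln for ln in chat_lines if "TODO" in ln or "please" in ln.lower()]
    body = "\n".join(f"- {d}" for d in duties) or "- (no duties detected)"
    return f"{header}\nTraits: {traits}\nObserved duties:\n{body}"
```

Even this toy version makes the privacy stakes concrete: the "manual" is built directly out of a colleague's messages.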

In China’s fast-moving tech scene, the meme collided with real workplace pressures. The satirical project circulated on social networks, becoming a lens through which workers debated whether such AI doubles were a path to efficiency or a threat to job security. The MIT Technology Review piece quotes several engineers who say their employers are encouraging them to document their routines and decision logs so that an AI could take a first pass at routine tasks. The conversation is not merely about convenience; it touches deeper questions about consent, privacy, and the authenticity of a coworker’s judgment once it is handed over to a machine.

Analysts say this moment isn’t isolated to China. Across the globe, teams are increasingly experimenting with “digital colleagues” that can offload repetitive tasks or provide a first draft of decisions. But the Chinese episode is notable for how quickly a town-square meme turned into a debate about policy, culture, and governance inside tech firms. The Southern Metropolis Daily interviewed Zhou, who said the project began as a stunt in response to AI-related layoffs and the trend of prompting employees to automate themselves. The rapid spread of Colleague Skill underscores both curiosity about AI’s potential and discomfort with the moral and practical implications of cloning a living coworker into a software agent.

Practitioner takeaway: codifying a coworker’s tacit knowledge into a formal “manual” can speed onboarding and handover dramatically, but it risks freezing outdated practices or misrepresenting a person’s judgment if the AI is used beyond narrowly defined tasks. Practically, teams will need robust versioning, clear consent, and explicit boundaries around what an AI double is allowed to do.
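The versioning-and-boundaries point above can be made concrete with a minimal sketch. The allowlist, version strings, and `double_may_run` function are all assumptions invented here, not anything from the project:

```python
# Explicit boundary: the only tasks the AI double is ever allowed to attempt.
ALLOWED_TASKS = {"draft_status_update", "triage_tickets"}


def double_may_run(task: str, manual_version: str, current_version: str) -> bool:
    """Gate an AI double's actions on two checks.

    1. The task is inside the narrowly defined, agreed-upon scope.
    2. The manual it was trained from is the current version, so stale
       practices aren't silently frozen in and replayed.
    """
    return task in ALLOWED_TASKS and manual_version == current_version
```

The design choice here is deliberate: the double fails closed, refusing anything outside its scope or built on a stale manual, rather than guessing.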

Another takeaway: data governance is non-negotiable. Pulling in chat histories and files from Lark and DingTalk raises privacy and IP concerns, especially when the resulting agent could operate on sensitive work streams. Enterprises will need policies about data access, retention, and opt-in, plus safeguards to prevent impersonation or misattribution of a coworker’s decisions.
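A minimal sketch of what "opt-in" could mean in practice, assuming a consent registry that no real platform in the article provides (the function name, the consent dict, and the `[private]` marker are all hypothetical):

```python
def ingest_chat_history(
    colleague: str,
    consents: dict[str, bool],
    messages: list[str],
) -> list[str]:
    """Pull chat data only for colleagues who have explicitly opted in.

    Raises rather than silently ingesting when consent is absent, and
    drops messages flagged as private even after consent is given.
    """
    if not consents.get(colleague, False):
        raise PermissionError(f"no opt-in consent recorded for {colleague}")
    return [m for m in messages if not m.startswith("[private]")]
```

The point is the failure mode: absence of a consent record is treated the same as refusal, which is the opposite of how bulk chat-history imports typically behave.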

A third angle: the technology isn’t magic yet. The idea of a “doppelganger” AI depends on reliable, up-to-date inputs and tight integration with workplace tools. If the instructions drift or the AI lacks domain nuance, the agent could produce sloppy or erroneous output, eroding trust rather than saving time.

Fourth, and crucially, this is a moment for product teams and startups: if you’re shipping AI-enabled workflows this quarter, it’s a reminder to pair automation with strong governance, transparent user signals, and clear responsibility boundaries. The appeal of “smaller, cheaper, faster” digital doubles is compelling, but the execution must respect people, privacy, and the limits of automation in knowledge work.

In short, Colleague Skill didn’t just spoof a sci‑fi premise; it catalyzed a real industry debate about whether our best work is best captured, audited, and delegated to AI—and who owns that decision when the machine speaks with a coworker’s voice.

Sources

  • Chinese tech workers are starting to train their AI doubles–and pushing back
