Chinese tech workers train AI doubles
By Alexander Cole

Bosses want workers to automate themselves into AI avatars.
Tech workers in China are being nudged to distill their workflows into AI agents that can act on their behalf, a provocative push that’s sparking both curiosity and unease. A viral GitHub project called Colleague Skill—presented as a stunt but resonating with real anxieties—lets users map a coworker’s tasks, quirks, and decision patterns into an AI proxy that could, in theory, handle parts of their job. The project imports chat histories and files from popular workplace apps like Lark and DingTalk and spins out reusable manuals that describe duties and idiosyncrasies for an AI agent to imitate. The effort was created by Tianyi Zhou, an engineer at Shanghai Artificial Intelligence Laboratory, who told Southern Metropolis Daily the idea began as satire in response to AI-related layoffs and a rising tide of companies asking employees to automate themselves.
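The core mechanic, as described, is straightforward: ingest a coworker's exported chat history and distill recurring behavior into a reusable manual an agent can imitate. A minimal sketch of that idea in Python follows. All names here (`Message`, `build_manual`) are illustrative assumptions, not the actual Colleague Skill code or the Lark/DingTalk export formats.

```python
# Hypothetical sketch: distilling exported workplace chat logs into a
# crude "skill manual" an AI agent could imitate. Not the real project's
# code; message schema and function names are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

def build_manual(name: str, messages: list[Message], top_n: int = 3) -> str:
    """Summarize a colleague's most frequent responses as a manual."""
    phrases = Counter(
        m.text.strip().lower() for m in messages if m.sender == name
    )
    lines = [f"# Skill manual: {name}", "", "## Recurring responses"]
    for phrase, count in phrases.most_common(top_n):
        lines.append(f'- "{phrase}" (seen {count}x)')
    return "\n".join(lines)

logs = [
    Message("wei", "ship it after the nightly tests pass"),
    Message("wei", "ship it after the nightly tests pass"),
    Message("lin", "looks good to me"),
    Message("wei", "check the staging dashboard first"),
]
print(build_manual("wei", logs))
```

A real pipeline would need far richer extraction (decision patterns, exceptions, tone), but even this toy version shows how quickly private chat data becomes a behavioral template.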
What makes Colleague Skill striking is not its polish but its nerve. In a country where enterprise AI adoption is accelerating, the project highlights a real appetite for "digital coworkers": AI agents that carry forward someone's known methods, shortcuts, and style. Its virality comes from a paradox: the tools to distill knowledge already exist, and using them signals a workplace where tacit know-how can be codified, outsourced, or replaced outright by software. Tech workers who spoke with MIT Technology Review described a mixed mood, excited by the prospect of more efficient workflows yet wary of job insecurity and the ethics of turning colleagues into algorithmic replicas.
From a practical standpoint, Colleague Skill points to a broader trend: the rise of “AI doubles” as a feature in enterprise tooling. If a coworker’s processes can be codified, you can design assistants that respond in their pattern, defer to their preferred methods, or prefill tasks in shared workstreams. But turning that into a trustworthy product is far from trivial. Fidelity matters—will the AI replicate a collaborator accurately, including their caveats and exceptions? Will it honor the person’s privacy and consent when importing private chats and documents? And how will organizations audit the outputs when the line between human judgment and machine replication blurs?
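The consent and auditability questions above lend themselves to a concrete design: a proxy agent that can only act within scopes its human has explicitly approved, and that records every decision for later review. The sketch below is a hypothetical illustration under those assumptions; `ProxyAgent` and its methods are invented, not part of any shipping product.

```python
# Illustrative sketch of a consent-gated "AI double": the proxy acts only
# in explicitly approved scopes, and every request (allowed or not) is
# appended to an audit log. All names here are hypothetical.
from datetime import datetime, timezone

class ProxyAgent:
    def __init__(self, owner: str, consented_scopes: set[str]):
        self.owner = owner
        self.consented_scopes = consented_scopes
        self.audit_log: list[dict] = []

    def act(self, scope: str, task: str) -> str:
        allowed = scope in self.consented_scopes
        # Log before acting, so even refused requests are auditable.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "scope": scope,
            "task": task,
            "allowed": allowed,
        })
        if not allowed:
            return f"escalate to {self.owner}: no consent for '{scope}'"
        return f"handled '{task}' in {self.owner}'s style"

agent = ProxyAgent("wei", consented_scopes={"code-review"})
print(agent.act("code-review", "review the release branch"))
print(agent.act("hiring", "screen a candidate"))
```

The design choice worth noting is that refusal paths are logged too: an organization auditing an AI double needs to see not just what it did, but what it declined and escalated.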
Concrete takeaways for teams watching this space unfold:
- Treat consent as a first-class requirement: importing a colleague's private chats and documents into any modeling pipeline without explicit permission is an ethical and likely legal liability.
- Test fidelity explicitly. An AI double that reproduces someone's habits but drops their caveats and exceptions can be worse than no double at all.
- Build audit trails from the start, so organizations can review an agent's outputs as the line between human judgment and machine replication blurs.
What this suggests for products shipping this quarter: expect a wave of enterprise features that help teams capture tacit knowledge in a privacy- and governance-friendly way, with strong emphasis on consent, auditability, and fail-safes. Early demonstrations like Colleague Skill show the allure of “digital colleagues,” but the real product challenge is building reliable, transparent, and ethically sound tools that can be used with consent across organizations—not merely spoofing a coworker’s workflow for a joke.
Ultimately, Colleague Skill isn’t a finished product; it’s a spotlight on a pending shift in how companies think about knowledge, automation, and the boundaries between human and machine workflows.