AI Doubles Spark Worker Reconsideration in China
By Alexander Cole

Tech workers in China are being told to automate themselves.
A viral GitHub stunt named Colleague Skill has become a flashpoint for how bosses want to reshape the modern Chinese tech stack: teach an AI to imitate a coworker’s routines, then deploy that digital twin to handle tasks and judgment calls that once required a human touch. The project, created by Tianyi Zhou of the Shanghai Artificial Intelligence Laboratory, asks users to pick a colleague and fill in a brief profile; the AI then imports that person’s chat history and files from popular workplace apps like Lark and DingTalk. The result is a “manual” of duties, quirks, and workflows that can supposedly be consumed by an agent such as OpenClaw or Claude Code to act on the coworker’s behalf.
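To make the pipeline concrete, here is a minimal sketch of what distilling exported chat logs into a reusable “manual” might look like. Everything in it is an assumption for illustration: the message schema, the field names, and the `build_manual` helper are invented here, not taken from the actual Colleague Skill project.

```python
from collections import Counter

def build_manual(profile, messages):
    """Hypothetical sketch: summarize a colleague's recurring tasks
    from exported chat messages into a prompt an agent could consume.
    The schema (a 'task' tag per message) is an assumption."""
    task_counts = Counter(m["task"] for m in messages if m.get("task"))
    top_tasks = [task for task, _ in task_counts.most_common(3)]
    return {
        "name": profile["name"],
        "role": profile["role"],
        "routine_tasks": top_tasks,
        "prompt": (
            f"You are acting as {profile['name']} ({profile['role']}). "
            f"Handle these recurring tasks: {', '.join(top_tasks)}."
        ),
    }

# Illustrative data: a backend engineer whose logs show two recurring tasks.
profile = {"name": "Wei", "role": "backend engineer"}
messages = [
    {"task": "code review"},
    {"task": "code review"},
    {"task": "deploy checklist"},
    {"text": "lunch?"},  # untagged chatter is ignored
]
manual = build_manual(profile, messages)
print(manual["routine_tasks"])  # most frequent tasks first
```

The point of the sketch is the shape of the artifact, not the method: whatever the real tooling does, it has to turn messy traces into a compact, machine-consumable description of one person’s routine work.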
The premise is simple yet bold: capture the tacit know-how embedded in a person’s day-to-day work and distill it through an AI so it can act in their stead when needed. In practice, that means everything from responding to routine inquiries to following a preferred sequence of steps for common projects, all condensed into reusable prompts and modules. The project began as a stunt, an ironic commentary on AI-related layoffs and the creeping pressure inside many firms to automate labor to the point of outsourcing the human element entirely, but it has taken on a life of its own for a different reason: managers are actively encouraging staff to document their workflows so AI agents can take over specific tasks.
What matters here isn’t just the novelty of a coworker AI, but what it signals about workplace dynamics in a high-output, highly centralized economy. The MIT Technology Review notes that the Colleague Skill project spread across Chinese social media and sparked conversations about consent, privacy, and the feasibility of truly replicating a person’s decision style through data alone. The technology borrows a familiar playbook from the AI “digital twin” idea: an attempt to create a living model of a person that can be directed in future contexts. But unlike a clinical twin, this one is built from chat logs, project histories, and the kind of informal know-how that often lives only in the fabric of a team’s daily routines.
From a practitioner’s lens, several concrete threads emerge. First, access to a steady stream of work artifacts matters more than fancy models: if Lark, DingTalk, and other collaboration tools can export clean, permissioned traces, you can stitch a usable agent profile faster. Second, behavioral fidelity matters as much as technical accuracy: an AI that imitates someone’s response cadence but misreads a workflow can create more risk than it alleviates. Third, privacy and consent are not afterthoughts here, but ongoing design constraints: who owns the AI’s version of “you,” who controls it, and how it’s audited when it makes a misstep? Fourth, the compute and data costs aren’t trivial in enterprise-scale deployments: you’re not training a novelty; you’re provisioning a reusable, task-capable agent that must stay secure, up-to-date, and aligned with corporate policies.
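The consent and audit constraints above are design work, not policy fine print. Here is a minimal sketch, under stated assumptions, of what two such guardrails might look like: a consent filter that keeps the agent profile to explicitly shared channels, and an audit record for every action the digital colleague takes. The consent registry, channel names, and `audit` helper are all hypothetical.

```python
from datetime import datetime, timezone

# Assumed consent registry: channels the colleague has explicitly
# agreed to have exported. Anything else stays out of the profile.
CONSENTED_CHANNELS = {"project-alpha", "support-rotation"}

def filter_artifacts(artifacts):
    """Keep only work artifacts from channels covered by consent."""
    return [a for a in artifacts if a["channel"] in CONSENTED_CHANNELS]

def audit(agent_name, action):
    """Return a timestamped record so any misstep can be traced later."""
    return {
        "agent": agent_name,
        "action": action,
        "at": datetime.now(timezone.utc).isoformat(),
    }

# Illustrative export: one consented channel, one private DM.
artifacts = [
    {"channel": "project-alpha", "text": "release steps"},
    {"channel": "dm-private", "text": "personal note"},
]
allowed = filter_artifacts(artifacts)
print(len(allowed))  # the private DM is excluded
```

The design choice worth noting is that both checks run outside the model: consent is enforced before any data reaches the agent, and the audit trail exists whether or not the agent behaves as expected.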
Analysts warn this is an early, imperfect experiment with outsized implications. The Colleague Skill project underscores a broader tension: the more work is codified into an AI’s routines, the more firms risk outsourcing judgment to a system trained on past behaviors. In the near term, expect a wave of pilots in product support, code review, and internal operations where “digital colleagues” can handle repetitive cycles while humans tackle ambiguous, high-stakes decisions. The danger is that a company confuses automation with capability, mistaking smooth pipelines for genuine understanding.
For product teams watching this quarter, the takeaway is not that you should rush to clone every employee, but that you should rethink collaboration boundaries. If a digital twin can reliably handle standardized tasks, you gain speed—but you must design guardrails for privacy, accountability, and talent development. And as organizations explore AI doubles, the real test will be whether these agents augment human work without eroding the sense of workmanship that gives teams their edge.
Analogy: think of Colleague Skill as bottling a coworker’s workflow into a portable recipe—pour it into an AI chef, and you can serve the same dish across teams, locations, and shifts, without ever meeting the original cook. The question is whether the taste remains true when the cook is long gone, or if it turns into something that serves the machine more than the human behind it.