Chinese Firms Train AI Doubles of Engineers
By Alexander Cole

Bosses are teaching AI to clone their engineers.
Chinese tech workers are watching a real-time test of AI’s ability to replace, or at least augment, human workflows. A viral project called Colleague Skill, created by Tianyi Zhou of the Shanghai Artificial Intelligence Laboratory, promises to distill a coworker’s skills and personality traits into an AI agent that can handle tasks on that coworker’s behalf. The tool reportedly imports chat history and files from the popular workplace apps Lark and DingTalk and then generates reusable manuals detailing that colleague’s duties—and even their quirks—for an AI to replicate. The project began as a stunt, but its viral reception reflects a deeper trend: bosses urging employees to document their workflows so AI agents can automate parts of their jobs.
The project’s arc mirrors a broader moment in China’s tech sector. MIT Technology Review describes a wave of “soul-searching” among enthusiastic early adopters who are navigating an anxious middle ground: AI promises efficiency, yet the idea of digital doubles raises questions about privacy, job security, and what it means to truly know how a coworker works. The Colleague Skill concept centers on building an AI that can answer to a supervisor as if it were the human colleague—an automated stand-in that can type, reason, and retrieve information in the colleague’s voice, at scale.
The workflow is telling. By stitching together a person’s emails, chats, project files, and documented routines, the tool aims to create a modular “know-how” kit that an AI agent can execute. OpenClaw and Claude Code, two agentic coding tools, are cited as possible execution engines for turning those manuals into action. The core proposition isn’t purely novelty; it’s a glimpse of what enterprise AI could become: a library of cloned capabilities that can be summoned on demand, freeing teams from repetitive tasks while preserving a recognizable working style.
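To make the "know-how kit" idea concrete, here is a minimal sketch of what one distilled skill manual might look like as a data structure an execution agent could consume. This is purely illustrative: the class, field names, and prompt format are assumptions for the sake of the example, not details of the actual Colleague Skill project.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one distilled "skill manual" entry.
# Field names are illustrative, not from the real project.
@dataclass
class SkillManual:
    colleague: str
    task: str                    # e.g. "triage incoming bug reports"
    steps: list[str]             # the distilled routine, in order
    style_notes: list[str] = field(default_factory=list)  # quirks to mimic

def render_prompt(manual: SkillManual) -> str:
    """Turn a manual into instructions an execution agent could follow."""
    lines = [f"You are standing in for {manual.colleague} on: {manual.task}."]
    lines += [f"{i}. {step}" for i, step in enumerate(manual.steps, 1)]
    if manual.style_notes:
        lines.append("Match these habits: " + "; ".join(manual.style_notes))
    return "\n".join(lines)

manual = SkillManual(
    colleague="Li Wei",
    task="triage incoming bug reports",
    steps=["label severity", "assign an owner", "reply with an ETA"],
    style_notes=["short sentences", "always thanks the reporter"],
)
print(render_prompt(manual))
```

The point of the sketch is that the manual, not the model, carries the colleague-specific knowledge: swapping the execution engine (OpenClaw, Claude Code, or anything else) leaves the distilled routine intact.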
Analysts and practitioners will want to watch for two dynamics. First, data governance and consent are hard limits. China’s workplace–tech stack is dominated by tools like Lark and DingTalk, and any effort to automate a coworker’s duties at scale tightly intertwines with privacy, data localization, and the unwritten norms of “who owns the digital version of you.” Second, there’s the reliability problem. An AI that mimics a person’s quirks can also imitate errors or biases. A “digital twin” built from chat history and documents may propagate maladaptive patterns if not carefully curated, audited, and bounded by guardrails.
An apt analogy helps: this is like handing a clone of your workday—your routines, habits, and decision shortcuts—over to a robot with a human veneer. It can accelerate routine tasks, but if the clone misreads nuance or slips into bad habits, the consequences compound faster than a single human error.
For practitioners, two concrete takeaways stand out. One, data provenance and scope matter more than raw capability. The value lies in how the knowledge distilled from a person is bounded, labeled, and retrievable, with clear permissions and a life cycle (what can be accessed, for how long, and who can audit or revoke). Two, organizational design and culture will shape adoption. If a project is framed as “replace the worker,” it invites resistance and moral hazard; if framed as “augment the team with a scalable knowledge layer,” it can drive adoption with safer expectations and governance.
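The first takeaway, bounding distilled knowledge with clear permissions and a life cycle, can be sketched in code. The following is one possible design under stated assumptions: the class, its fields, and the scope categories are invented for illustration, not drawn from any real product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative sketch: one way to bound access to a person's distilled
# knowledge with a scope, an expiry, a kill switch, and an audit trail.
@dataclass
class KnowledgeGrant:
    source_person: str
    scope: set[str]                 # e.g. {"project-docs", "team-chat"}
    expires: datetime               # when the grant lapses automatically
    revoked: bool = False           # the person (or an auditor) can revoke
    audit_log: list[str] = field(default_factory=list)

    def can_access(self, category: str, requester: str) -> bool:
        """Check a request against scope, expiry, and revocation; log it."""
        now = datetime.now(timezone.utc)
        allowed = (not self.revoked) and now < self.expires \
            and category in self.scope
        self.audit_log.append(
            f"{now.isoformat()} {requester} -> {category}: {allowed}")
        return allowed

grant = KnowledgeGrant(
    source_person="Li Wei",
    scope={"project-docs"},
    expires=datetime.now(timezone.utc) + timedelta(days=30),
)
assert grant.can_access("project-docs", "agent-01")      # in scope
assert not grant.can_access("private-chat", "agent-01")  # out of scope
grant.revoked = True
assert not grant.can_access("project-docs", "agent-01")  # revoked
```

The design choice worth noting is that every check, allowed or denied, lands in the audit log, which is exactly the kind of "who can audit or revoke" record the takeaway calls for.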
What this means for products shipping this quarter is subtle but material. Enterprises will expect AI tools that can capture and reuse tacit knowledge while offering robust privacy controls, explainability, and audit logs. Expect more features around consent, data scoping, and retention, plus governance dashboards that show who defined which rules and how the AI is using company information. In short, the early experiments point toward a future where AI co-workers are a product category—yet only if vendors ship with strong guardrails and transparent limitations.
The MIT Tech Review piece signals a provocative experiment in workforce automation that could accelerate enterprise AI deployments—or stall them, if workers and regulators push back. Either way, the next few quarters will reveal whether digital doubles become a practical productivity tool or a cautionary tale about the speed and scope of AI automation.