MONDAY, APRIL 20, 2026
AI & Machine Learning · 3 min read

Chinese tech workers train AI doubles

By Alexander Cole

Image: hands hold a rice bowl as digital grains are pulled away into the air (technologyreview.com)

Bosses want workers to automate themselves into AI avatars.

Tech workers in China are being nudged to distill their workflows into AI agents that can act on their behalf, a provocative push that’s sparking both curiosity and unease. A viral GitHub project called Colleague Skill—presented as a stunt but resonating with real anxieties—lets users map a coworker’s tasks, quirks, and decision patterns into an AI proxy that could, in theory, handle parts of their job. The project imports chat histories and files from popular workplace apps like Lark and DingTalk and spins out reusable manuals that describe duties and idiosyncrasies for an AI agent to imitate. The effort was created by Tianyi Zhou, an engineer at Shanghai Artificial Intelligence Laboratory, who told Southern Metropolis Daily the idea began as satire in response to AI-related layoffs and a rising tide of companies asking employees to automate themselves.
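The article describes these "reusable manuals" only in outline; their actual schema isn't documented here. As a rough, hypothetical sketch of what such a skill manual could look like, and how it might be flattened into instructions for an LLM agent, consider the following (all field names are illustrative assumptions, not Colleague Skill's real format):

```python
from dataclasses import dataclass, field

@dataclass
class SkillManual:
    """Hypothetical 'skill manual' describing a coworker's duties and
    quirks for an AI proxy. Field names are illustrative assumptions."""
    owner: str
    duties: list[str] = field(default_factory=list)
    quirks: list[str] = field(default_factory=list)          # style notes, e.g. "replies tersely"
    decision_rules: list[str] = field(default_factory=list)  # e.g. "escalate payment issues"

    def to_system_prompt(self) -> str:
        # Flatten the manual into plain-text instructions an agent could follow.
        lines = [f"You are acting on behalf of {self.owner}."]
        if self.duties:
            lines.append("Duties: " + "; ".join(self.duties))
        if self.quirks:
            lines.append("Style: " + "; ".join(self.quirks))
        if self.decision_rules:
            lines.append("Decision rules: " + "; ".join(self.decision_rules))
        return "\n".join(lines)

manual = SkillManual(
    owner="Alice",
    duties=["triage weekly bug reports"],
    quirks=["replies tersely", "prefers bullet lists"],
    decision_rules=["escalate anything touching payments"],
)
print(manual.to_system_prompt())
```

The interesting (and unsettling) part is not the data structure but where the entries come from: in Colleague Skill's case, mined from a colleague's chat histories and files rather than written down voluntarily.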

What makes Colleague Skill striking is not its polish, but its nerve. In a country where enterprise AI adoption is accelerating, the project highlights a real appetite for “digital coworkers”—AI agents that can carry forward someone’s known methods, shortcuts, and style. The viral moment came from the paradox: tools exist to distill knowledge, yet the act signals a workplace where tacit know-how could be codified, outsourced, or even replaced by software. Tech workers who spoke with MIT Technology Review described a mixed mood—excited by the prospect of more efficient workflows, wary of job insecurity and the ethics of turning colleagues into algorithmic replicas.

From a practical standpoint, Colleague Skill points to a broader trend: the rise of “AI doubles” as a feature in enterprise tooling. If a coworker’s processes can be codified, you can design assistants that respond in their pattern, defer to their preferred methods, or prefill tasks in shared workstreams. But turning that into a trustworthy product is far from trivial. Fidelity matters—will the AI replicate a collaborator accurately, including their caveats and exceptions? Will it honor the person’s privacy and consent when importing private chats and documents? And how will organizations audit the outputs when the line between human judgment and machine replication blurs?

Here are a few concrete practitioner takeaways for teams watching this space:

  • Governance and consent matter more than ever. Automatically importing chat histories and files raises privacy and employment-law questions. Any viable implementation will need explicit consent, clear data provenance, and robust controls to limit what can be copied or used to train an agent.
  • Fidelity vs. reliability is a tricky tradeoff. A highly personalized proxy can speed up routine tasks, but misrepresenting a colleague’s judgment or quirks can backfire, especially in high-stakes decisions. Product teams should favor transparent observability and explicit disclaimers about when and how an AI agent should defer to human review.
  • Data integration hurdles are real. The Colleague Skill workflow relies on connecting enterprise apps like Lark and DingTalk. In practice, building a dependable “digital coworker” requires disciplined data schemas, versioning, and provenance to prevent stale or conflicting signals from seeping into the agent’s behavior.
  • The notion of AI doubles reshapes work design, not just automation. If adopted seriously, teams will need to rethink roles, accountability, and team norms. Digital replicas could alter how performance is measured and how collaboration happens—posing both productivity gains and cultural risks.
  • What this suggests for products shipping this quarter: expect a wave of enterprise features that help teams capture tacit knowledge in a privacy- and governance-friendly way, with strong emphasis on consent, auditability, and fail-safes. Early demonstrations like Colleague Skill show the allure of “digital colleagues,” but the real product challenge is building reliable, transparent, and ethically sound tools that can be used with consent across organizations—not merely spoofing a coworker’s workflow for a joke.
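The consent and provenance controls in the takeaways above can be made concrete with a small sketch. Nothing below reflects Colleague Skill's actual code or the Lark/DingTalk APIs; it is a minimal illustration, under assumed field names, of gating imported records on explicit opt-in and tagging each kept record with provenance for later audit:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One imported workplace artifact (chat message, file).
    Field names are illustrative assumptions."""
    source: str      # e.g. "lark" or "dingtalk"
    author: str
    consented: bool  # explicit opt-in recorded from the author

def filter_for_training(records: list[Record], consenting_authors: set[str]) -> list[dict]:
    """Keep only records whose authors explicitly opted in, and attach a
    provenance tag so an agent's behavior can be traced back to its inputs."""
    kept = []
    for r in records:
        if r.consented and r.author in consenting_authors:
            kept.append({"data": r, "provenance": f"{r.source}:{r.author}"})
    return kept

records = [
    Record("lark", "alice", True),
    Record("lark", "bob", False),       # never opted in: excluded
    Record("dingtalk", "carol", True),  # opted in, but not on the approved list
]
print(filter_for_training(records, consenting_authors={"alice"}))
```

The design choice worth noting is the double gate: a per-record consent flag alone is not enough, because an organization may also need a revocable, centrally managed list of who currently permits their data to be used.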

Ultimately, Colleague Skill isn’t a finished product; it’s a spotlight on a pending shift in how companies think about knowledge, automation, and the boundaries between human and machine workflows.

Sources

  • Chinese tech workers are starting to train their AI doubles – and pushing back
