
From Foundation Models to the Operating Room: How Humanoids, Simulation and Telesurgery Are Converging
By Sophia Chen
In labs from San Francisco to Seoul, teams are stitching together three technologies to move robots off test benches and into hospitals: large-scale robot intelligence, hyper-realistic simulation, and remote-enabled surgical arms. The result is faster validation cycles, cross-border operations, and a new set of safety questions that regulators and health systems must answer this year.
Physical-intelligence startups, digital-twin firms and surgical-telemedicine companies closed headline funding rounds in November 2025, signaling that capital is following the technical pipeline linking learning models, software-in-the-loop testing and live clinical use. Physical Intelligence raised $600 million in a Series B to scale foundation models that translate vision and motion into joint commands; Parallax Worlds secured $4 million to stress-test robots in virtual replicas of factories and clinics; and Sovato Health, which enables telesurgery, recently closed a $26 million round that included investment from Intuitive Surgical.
Why this moment matters for humanoid and surgical robotics
Investment and product roadmaps are aligning. Physical Intelligence’s November 2025 Series B, led by CapitalG and Lux Capital, values the company at about $5.6 billion and brings its total funding to roughly $1.1 billion; the round is explicitly intended to collect more real-world data and deploy larger vision-language-action models across robots. That flow of capital lets research-grade models move toward production-grade runtimes that must meet industrial safety envelopes.
At the same time, surgical robotics still has enormous headroom. A recent industry forecast cited by The Robot Report projects that the global surgical-robotics market will double by 2029; today, only roughly 2.5% of the 10 million major operating-room procedures performed annually in the United States are robot-assisted. Those numbers explain why both legacy medical-device companies and startups are designing devices to be "telesurgery-native," embedding remote capability from the first release rather than adding it later.
From pixels to joint torques: what foundation models add
The technical bridge between perception and safe motion is narrower than it sounds, but harder to build. Physical Intelligence describes its vision-language-action (VLA) models as 3- to 5-billion-parameter transformers that tokenize RGB-D streams and short motion histories, then predict the next 50 steps in roughly 100 milliseconds. A hardware-abstraction layer converts those tokens into robot-specific joint commands with explicit force and speed limits, which is essential for meeting safety envelopes on humanoids and surgical arms.
That architecture matters because it provides predictable timing and bounded outputs. The company reports that RECAP, an approach that combines reinforcement learning with demonstrations and corrective coaching, doubled throughput and cut failure rates on tasks such as inserting a coffee filter, folding previously unseen laundry, and assembling a cardboard box, compared with imitation learning alone. For clinical teams, predictability and repeatability are prerequisites for any device that will interact with people under anesthesia or in constrained anatomy.
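The safety-envelope step can be sketched as a thin clamping layer sitting between model outputs and the robot. This is a minimal illustration, not Physical Intelligence's actual API: the joint-limit values, function names and the (velocity, torque) command format are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class JointLimits:
    max_velocity: float  # rad/s, hypothetical per-joint bound
    max_torque: float    # N*m, hypothetical per-joint bound

def clamp(value: float, limit: float) -> float:
    """Saturate a command to the symmetric bound [-limit, +limit]."""
    return max(-limit, min(limit, value))

def to_joint_commands(predicted, limits):
    """Map raw model outputs to bounded per-joint commands.

    `predicted` is a list of (velocity, torque) pairs from the policy.
    Whatever the model emits, every command leaving this layer stays
    inside the robot-specific safety envelope.
    """
    return [
        (clamp(v, lim.max_velocity), clamp(t, lim.max_torque))
        for (v, t), lim in zip(predicted, limits)
    ]

limits = [JointLimits(max_velocity=1.5, max_torque=40.0)] * 2
raw = [(2.7, -55.0), (0.4, 10.0)]   # model output; joint 0 exceeds both limits
safe = to_joint_commands(raw, limits)
# joint 0 is saturated to (1.5, -40.0); joint 1 passes through unchanged
```

The point of keeping this layer separate from the model is that its bounds can be verified and certified independently of however the upstream network behaves.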
Simulation and digital twins: compressing validation from years to weeks
A persistent bottleneck for humanoid and surgical systems is validation in realistic environments. Parallax Worlds’ pitch is explicit: convert simple video walkthroughs into high-fidelity, interactive 3D twins and run a robot’s real control software inside those simulations. The company says that capability turns what used to take years of expensive on-site iteration into weeks of virtual iteration, and it has already signed five robotics customers spanning manufacturing and construction.
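Running real control software inside a simulation can be sketched as a loop that steps a virtual twin and feeds its observations back to the controller under test. Everything here is a hypothetical stand-in, not Parallax Worlds' interface: the `TwinSim` class, the 1-D task, and the step rate are invented for illustration.

```python
import random

class TwinSim:
    """Toy stand-in for a digital twin: a 1-D position the robot must hold."""
    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)  # seeded so every run is replayable
        self.position = 0.0

    def observe(self) -> float:
        return self.position

    def step(self, command: float, dt: float = 0.01) -> None:
        # Apply the command plus a small, deterministic disturbance.
        self.position += (command + self.rng.uniform(-0.02, 0.02)) * dt

def controller(observation: float) -> float:
    """The 'real' control software under test: hold position at 1.0."""
    return 2.0 * (1.0 - observation)  # simple proportional law

sim = TwinSim(seed=42)
for _ in range(2000):                # ~20 simulated seconds at 100 Hz
    sim.step(controller(sim.observe()))

assert abs(sim.observe() - 1.0) < 0.05  # controller settles near the target
```

Because the disturbance comes from a seeded generator, a failing run can be replayed exactly, which is what turns an on-site debugging session into a repeatable virtual one.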
Simulation does two distinct jobs for medical robotics. First, it expands the set of edge cases that can be tested deterministically: rare device failure modes, network latency spikes during telesurgery, or instrument collisions in narrow fields. Second, it generates regulatory artifacts (reproducible logs, replayable scenarios and system-in-the-loop telemetry) that device makers can present to auditors and hospital risk committees when seeking 510(k) clearance or institutional credentialing.
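One way to make a latency-spike edge case deterministic is to inject the spikes from a seeded source, so the exact scenario and its telemetry log can be re-run later for an auditor. The scenario format below is invented for illustration; the latency figures are hypothetical.

```python
import random

def run_latency_scenario(seed: int, n_packets: int = 500,
                         spike_ms: float = 250.0, budget_ms: float = 200.0):
    """Replay a command stream with seeded latency spikes.

    Returns a telemetry log of (packet index, latency_ms, over_budget).
    The same seed always reproduces the same log, which is what lets a
    scenario be attached to a submission and re-run identically later.
    """
    rng = random.Random(seed)
    log = []
    for i in range(n_packets):
        latency = rng.gauss(40.0, 10.0)   # nominal network jitter
        if rng.random() < 0.02:           # rare spike, deterministically placed
            latency += spike_ms
        log.append((i, latency, latency > budget_ms))
    return log

log_a = run_latency_scenario(seed=7)
log_b = run_latency_scenario(seed=7)
assert log_a == log_b                     # bit-identical replay
over_budget = [entry for entry in log_a if entry[2]]
```

The same pattern extends to other injected faults: as long as every random draw flows through one seeded generator, the whole scenario is a reproducible artifact rather than a one-off observation.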
Regulatory and clinical friction: the practical safety questions
Telesurgery is not just a network problem; it is a systems-engineering problem that spans device design, cybersecurity, human factors and hospital workflows. Sovato Health’s co-founder and CEO Cynthia Perazzo put it bluntly: "Remote surgeries and procedures are inevitable." The company has been publishing technical guidelines co-authored with cybersecurity teams and device partners to define what safe, scalable remote procedures require. In the United States, Perazzo notes, the robot maker will need to include the telesurgery use case in its 510(k) submission or file amendments with supporting evidence.
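On the device side, a telesurgery-native design typically pairs the remote link with an on-robot watchdog that drops into a safe hold when round-trip latency exceeds a budget. This is a minimal sketch of that idea; the threshold, trip count and mode names are invented for illustration, not drawn from any vendor's design.

```python
from enum import Enum

class Mode(Enum):
    TELEOP = "teleop"          # remote surgeon in control
    SAFE_HOLD = "safe_hold"    # arms frozen, local team alerted

class LatencyWatchdog:
    """Trip into SAFE_HOLD after `trip_count` consecutive late packets."""
    def __init__(self, budget_ms: float = 200.0, trip_count: int = 3):
        self.budget_ms = budget_ms
        self.trip_count = trip_count
        self.late_streak = 0
        self.mode = Mode.TELEOP

    def on_packet(self, rtt_ms: float) -> Mode:
        if rtt_ms > self.budget_ms:
            self.late_streak += 1
            if self.late_streak >= self.trip_count:
                self.mode = Mode.SAFE_HOLD  # latches until a human resets it
        else:
            self.late_streak = 0            # one good packet clears the streak
        return self.mode

wd = LatencyWatchdog()
for rtt in [50, 60, 250, 260, 40, 250, 300, 310]:
    mode = wd.on_packet(rtt)
# trips on the third consecutive late packet (250, 300, 310)
```

Requiring consecutive late packets avoids tripping on a single jitter outlier, while latching the safe state ensures a human, not the network, decides when remote control resumes.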
Sources
- Physical Intelligence raises $600M to advance robot foundation models - The Robot Report, 2025-11-25
- Parallax Worlds raises funding for hyper-realistic digital twins to test robots - The Robot Report, 2025-11-25
- Sovato CEO says big telesurgery advances are coming soon - The Robot Report, 2025-11-25
- Surgical robotics market to double by 2029: report - The Robot Report, 2025-11-28