Teleoperation-as-a-Service (TaaS)
Your First Home Robot May Not Be Autonomous
This blog discusses deployment, data scaling, and winning the robotics startup game.
TLDR:
- Teleoperation-as-a-Service (TaaS) will be a thing until fully autonomous robots work in people’s homes.
- Tesla Autopilot is arguably the only robot learning system working at scale to date.
- It offers many lessons for general-purpose robotics, one of which is deploy-time data scaling.
- Will Physical AGI arrive next year? A thought experiment: suppose the past 10 years of the self-driving industry never happened, and the hype only began after LLMs took off (i.e., right now). If you don’t believe a self-driving company founded today could solve autonomy in 3 years, then we can’t expect a general-purpose robot in that time either.
- Training-time scaling makes robotics work; deploy-time scaling makes the robotics business work.
Note: this blog is built around the concept of teleoperation-as-a-service (TaaS).
If you don’t know what teleoperation is, see my last blog.
What is TaaS: as a customer, you pay for someone to remotely teleoperate/supervise a robot in your home doing chores.
It’s the end of 2025. Robot learning startups have been shipping demos relentlessly this year. Physical Intelligence, DYNA, Sunday, Generalist, 1x, Figure, etc., each selling a step closer to the dream of a robot nanny in people’s homes doing chores.

There’s hype, and there are critics.
When 1x announced that they will ship a home humanoid in 2026 with limited full autonomy + TaaS, MKBHD posted a critical video voicing concerns about a premature product release.


I’m here to answer the question as someone in the front row of this heated technological revolution: what is a reasonable expectation for a home robot in everyone’s home (doing laundry, cleaning, dishes, etc.)? And what is the pathway to this dream?
My answer: full autonomy (arguably Physical AGI) is 5-10 years away. Expect TaaS to emerge in the short run, following the lessons learned from the last 10 years of self-driving.
Here is a deep dive into why this makes sense.
Data Scarcity Is the Core Problem
Yes, I know everyone probably knows robotics has a data problem at this point. But I need to restate the problem to make the solution stand out.
First, let’s reiterate the methodology adopted by the current trending “school of thought” on making general-purpose robots work: since we have seen immense success in LLMs and generative AI, where data and model scaling on the currently most efficient architectures (transformers and diffusion models) can produce AGI (arguably? [1][2]), the robotics industry should replicate every step of this scaling recipe. To this end, researchers added action as a modality to vision-language models (VLMs) and invented vision-language-action models (VLAs); companies started building massive teleoperation factories to scale human data collection.
It all comes down to one word: scaling.
Scale like LLMs.
Copy their success stories.
Although there are many alternatives to real-world teleoperated data, such as simulation (IsaacLab, Genesis, ManiSkill) and world models (like Gemini’s recent Veo for Robotics), which can potentially achieve super-linear scaling, they can never replace high-quality, real-world, on-embodiment demonstrations. Sergey Levine calls these alternative data sources the “spork of AGI”, a view I find myself agreeing with more and more these days.
But scaling is costly. Scaling blindly without positive cashflow is even more so. I talked about this in my last blog. So instead of hiring teleoperation workers to teleop in artificial scenes, are there better ways?
This is where the self-driving lesson comes in: build a closed-loop experience you can push to market first, then gather data at deploy time and be in position to scale when robots can one day be truly autonomous.
Tesla Autopilot: Sell the Experience First (Manual), Then Autonomy
If people are already paying for your robot before it works fully autonomously, then data scaling comes at net-zero cost, or even a profit.
Tesla did not start as a self-driving company. From the Roadster in 2008, to the first Autopilot hardware in 2014, and finally Full Self-Driving (FSD) in the 2020s, it was a long road before Tesla became a high-flyer in frontier AI robotics. The luxury EV experience alone was enough to convince customers, who “selflessly” contributed millions of hours of teleop data. In addition, Tesla stuck with its vision-only roadmap and kept a consistent sensor configuration across generations of vehicle models. These add up to a huge data advantage over every other self-driving player.
This is how you offload a robot data problem onto an operations problem. Large-scale deployment solves data diversity, covers the long tail of corner cases, enables automated data collection, dilutes collection cost, and more.
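To make “dilutes collection cost” concrete, here is a toy back-of-envelope model. All numbers and names are invented for illustration (none are Tesla’s actual figures); the point is only the shape of the curve: when customers pay for the experience itself, the marginal cost of each collected data-hour falls toward zero, and eventually below it, as the fleet grows.

```python
# Toy model: net cost per collected data-hour as a paid fleet scales.
# All numbers are hypothetical; only the trend matters.

def cost_per_data_hour(fleet_size: int,
                       fixed_ops_cost: float = 1_000_000.0,   # $/month: data infra, ops
                       margin_per_robot: float = 50.0,        # $/month: customer margin
                       data_hours_per_robot: float = 100.0):  # usable data-hours/month
    """Net cost (USD) of one hour of deploy-time data.

    Customers pay for the product, so their margin offsets the fixed
    cost of running the data pipeline. Negative values mean the fleet
    is effectively paying you to collect data.
    """
    net_cost = fixed_ops_cost - margin_per_robot * fleet_size
    return net_cost / (data_hours_per_robot * fleet_size)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} robots: ${cost_per_data_hour(n):8.2f} per data-hour")
```

With these made-up numbers, 1,000 robots cost $9.50 per data-hour, while at 100,000 robots the cost goes negative: the flywheel pays for itself.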
Here Claude illustrates Tesla scaling its fleet years ahead of FSD, which only started scaling in 2020:

Tesla: The Data Flywheel in Action
- 2024 revenue: $97.7B (+6,600x since 2008)
- Cumulative fleet: 7.2M+ vehicles on the road
- FSD miles (Dec 2025): 6.9B miles of real-world training data
- Years to profit: 12 (2008 → 2020)

[Charts: annual revenue & net income (billions USD); cumulative fleet size (M) and FSD miles (B)]
If you haven’t watched them, I highly recommend the Tesla talks at ICCV and NeurIPS this year by Ashok Elluswamy, which deep-dive into the technical details of how Autopilot scales.
TaaS is the Ultimate Crank for Robotics Data Flywheel
Yes, yes, I hear you. Cars are still useful even when not autonomous (you can still drive somewhere); robots are worthless if they can’t do chores autonomously; autonomous driving deals with a simpler problem on a 2D plane, whereas robotics problems are more diverse and contact-rich. As Nano Banana puts it:

I agree with these arguments to the extent that the robotics data flywheel is harder to crank up than autonomous driving’s, and requires a higher margin of “basic autonomy” before large-scale deployment under human supervision becomes technically and economically sensible. But watching frontline research, I believe we are in fact very close to meeting this requirement and scaling TaaS today.
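One way to quantify that “margin of basic autonomy”: TaaS staffing pencils out once interventions are rare enough that a single remote supervisor can cover several robots at once. A rough sketch, with all rates, durations, and wages invented for illustration:

```python
# Break-even sketch for TaaS staffing (all numbers hypothetical).
# The rarer the interventions, the more robots one supervisor covers,
# which is why "basic autonomy" is the economic threshold.

def robots_per_supervisor(interventions_per_hr: float,
                          minutes_per_intervention: float) -> float:
    """Robots one supervisor can cover, assuming the supervisor's
    attention is the bottleneck and interventions don't overlap."""
    busy_minutes_per_robot_hr = interventions_per_hr * minutes_per_intervention
    return 60.0 / busy_minutes_per_robot_hr

def supervision_cost_per_robot_hr(interventions_per_hr: float,
                                  minutes_per_intervention: float,
                                  supervisor_wage_per_hr: float = 30.0) -> float:
    """Labor cost of supervision, amortized per robot-hour."""
    return supervisor_wage_per_hr / robots_per_supervisor(
        interventions_per_hr, minutes_per_intervention)

# As autonomy improves (fewer interventions), one supervisor stretches
# across more robots and the per-robot labor cost drops:
for rate in (6.0, 2.0, 0.5):   # interventions per robot-hour
    k = robots_per_supervisor(rate, minutes_per_intervention=3.0)
    c = supervision_cost_per_robot_hr(rate, 3.0)
    print(f"{rate:>4} int/hr -> {k:5.1f} robots/supervisor, ${c:5.2f}/robot-hr")
```

Under these toy numbers, dropping from 6 to 2 interventions per robot-hour takes a supervisor from ~3 robots to 10, which is roughly where a chore subscription starts to look affordable.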
On mobile manipulators we have:
- Good old leader-follower arm teleoperation – used by Physical Intelligence and DYNA
- UMI-variants of mobile manipulation designed by Sunday and Generalist
On humanoid teleoperation, although there is still a way to go, general motion tracking achieved major breakthroughs in 2025. Academic results:
- SONIC (from NVIDIA GEAR)
- BeyondMimic (from Berkeley)
- TWIST2 (from Amazon FAR)
- GMT (from UCSD)
What can these systems do in 1-2 years?
Imagine this: the robot is instructed to load the dishwasher. It can load most of the dishes and silverware, but it has trouble fitting the biggest pot in. A human supervisor, remotely monitoring 3-5 robots at a time, intervenes on this task through teleoperation, then returns to a supervisory capacity. The takeover automatically triggers the data recording pipeline and marks the episode as a highly valuable learning sample.
These scenes are not science fiction. They might not happen immediately: there are safety, privacy, operational, and economic concerns to comb through. But nothing stops people from building this system on top of current research and having it ready to scale in the next year or two.
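The dishwasher scenario boils down to a simple control-handoff loop with a recorder attached. Here is a minimal sketch; every class, field, and task name is hypothetical, not any company’s actual pipeline:

```python
# Minimal sketch of an intervention-triggered recording pipeline:
# autonomous rollouts run untouched, but the moment a human takes
# over, each step is tagged by its source and the episode is flagged
# as a high-value correction. All names here are hypothetical.

import time
from dataclasses import dataclass, field

@dataclass
class Episode:
    task: str
    frames: list = field(default_factory=list)  # (timestamp, obs, action, source)
    had_intervention: bool = False

class InterventionRecorder:
    def __init__(self, task: str):
        self.episode = Episode(task)
        self.human_in_control = False

    def on_takeover(self):
        """Supervisor grabs the controls; flag the episode as a correction."""
        self.human_in_control = True
        self.episode.had_intervention = True

    def on_release(self):
        """Supervisor hands control back to the policy."""
        self.human_in_control = False

    def log_step(self, obs, action):
        source = "human" if self.human_in_control else "policy"
        self.episode.frames.append((time.time(), obs, action, source))

# The dishwasher scenario from above:
rec = InterventionRecorder(task="load_dishwasher")
rec.log_step(obs="dishes_remaining", action="place_plate")   # policy acting
rec.on_takeover()                                            # big pot trouble
rec.log_step(obs="large_pot", action="reorient_and_insert")  # human correction
rec.on_release()
rec.log_step(obs="rack_full", action="close_door")           # policy resumes

# Human-corrected episodes get priority in the training queue.
priority = "high" if rec.episode.had_intervention else "normal"
```

The key design choice is that recording is always on and the takeover event only changes the label: the corrected segments are exactly the long-tail failures the fleet’s policy most needs to learn from.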
The Takeaway: Deploy-Time Data Scaling Makes Robotics Business Work
A year ago I wrote a blog about robot data scarcity at training time. Now I’m writing about the same problem at deployment time. The speed of advancement in this field in 2025 has given me the confidence to predict further into the future.
But even with things moving at the speed of light, it still takes time to scale robot learning systems at deployment. To get a sense of this problem, assume the past 10 years of the self-driving industry never happened, and people only started hyping it after LLMs took off (i.e., right now). If you don’t think you could start a self-driving company now and solve it in 3 years, then neither can we have general-purpose robots in that time.
I’m pitching TaaS as the new hot word on the long list of robotics startup ideas, because it is a natural derivation of the last 10 years of self-driving lessons. While training-time data scaling makes robotics work, deploy-time data scaling makes the robotics business work.
For three years, LLMs sprinted. Scaling worked. As LLMs are moving beyond the age of pure scaling, the scaling for robotics is just beginning.
Physical AGI will not arrive overnight. TaaS is the bridge: ship before perfection; learn before autonomy.
The dishes are waiting.
