When Hard Fork hosts Kevin Roose and Casey Newton sat down with Bernt Bornich, CEO of robotics company 1X, to discuss their humanoid robot Neo, they had an opportunity to interrogate one of the most dystopian business models to emerge from the AI gold rush. Instead, they offered little more than gentle prodding about a 'feature' that should have dominated the entire conversation: every Neo robot requires a human teleoperator, which means customers must pay for the privilege of 24-hour video surveillance inside their own homes.
Let's be clear about what 1X is proposing. This isn't just occasional remote assistance or troubleshooting. This is human monitoring of your private domestic space, as and when the company pleases, dressed up as a necessary component of cutting-edge robotics. Roose's mild suggestion (to paraphrase) that some people with children might be uncomfortable doesn't begin to capture the magnitude of this privacy violation. It is not mere discomfort; it is an invitation to normalise commercial surveillance at a scale and intimacy previously unimaginable.
Bornich's defence relied on the oldest trick in the tech playbook: justification through edge cases. Yes, someone in assisted living might accept this trade-off. Yes, a family with a child with severe autism might find value in robotic assistance that outweighs privacy concerns. But these scenarios are precisely what they sound like: edge cases. They cannot and should not justify a broader business model that asks ordinary consumers to surrender the sanctity of their homes to corporate data collection.
The failure here isn't just 1X's audacious privacy overreach. It is the tech press's unwillingness to name it for what it truly represents. This isn't about robotics. It's about data. We've had three years to watch ChatGPT and its competitors gorge themselves on humanity's textual output. Now, as researchers like Fei-Fei Li have articulated, the next frontier is world models: AI systems trained not on words but on three-dimensional social space, on the physics of the real world, on the infinitely granular details of how humans actually live.
1X has found a remarkably cynical solution to the world model training problem: convince customers to pay for the privilege of providing the training data. Every Neo in every home becomes a node in a vast network, continuously feeding visual information about how humans move, interact, arrange their environments, and conduct their private lives. This is surveillance capitalism's final form. The user isn't just the product anymore; they are a paying subscriber to their own exploitation.
If 1X successfully deploys thousands of these robots, I expect they will be acquired before long for a few billion dollars. Not because their robots are revolutionary, but because they will have amassed a dataset of human domestic life that no amount of synthetic data or lab environments could replicate. The acquiring company won't be buying robots; it will be buying the most intimate training data ever collected, funded entirely by the surveilled themselves.
Tech journalists need to do better. When a CEO casually mentions perpetual surveillance as a feature rather than a dealbreaker, that should be where the interview begins, not a footnote in a broader conversation about capabilities. Our private homes are the last frontier, and we are watching in real time as companies plot their conquest while the press nods along politely.