In news that will either delight or slightly terrify you, a pair of humanoid robots from US firm Figure just tidied a bedroom and made a bed in under two minutes. And yes, they coordinated the comforter. Together.
The video shows two humanoids waltzing into a room, getting straight to work. A coat gets hung, a laptop gets closed, headphones find their rightful home. All the small indignities of a messy room, handled with mechanical precision. Because apparently, that's where we are now.

Robotic Room Service
But the real showstopper? The bed. These two metal marvels tackled the comforter, a notoriously floppy, uncooperative beast, with uncanny teamwork. They even did that subtle head-nod thing, like two humans silently agreeing on who gets which corner. The whole process was over in less than 120 seconds. Let that satisfying number sink in.
This isn't just about tidiness; it's a significant leap in robot collaboration and object manipulation. These bots, powered by Figure's Helix AI, didn't have a shared blueprint or a central boss barking orders. Each robot used its own cameras and internal rules to understand the room and, crucially, to figure out what its partner was doing just by watching. Every action changed the scene, prompting a real-time adjustment from both, all aimed at the shared goal of a perfectly made bed.
Handling that comforter was, as Figure noted, a monumental challenge. Unlike rigid objects, bedding has no fixed shape. It folds, it stretches, it shifts. The robots had to constantly guess each other's next move and adjust their grip, posture, and even their entire bodies. Which, if you think about it, is both impressive and slightly terrifying.
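The coordination described above can be sketched in miniature. This is an illustrative toy, not Figure's Helix code: each agent decides which corner of the comforter to take purely by observing where its partner is, with no shared plan or messages. All names here (`Agent`, `CORNERS`, and so on) are hypothetical.

```python
# Toy sketch of decentralized coordination: neither agent shares a plan;
# each picks its comforter corner by watching where its partner stands.
# These names are illustrative, not Figure's actual API.

CORNERS = {"left": (0.0, 0.0), "right": (2.0, 0.0)}

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class Agent:
    def __init__(self, name, position):
        self.name = name
        self.position = position  # (x, y) from onboard localization

    def choose_corner(self, observed_partner_position):
        # Infer the partner's likely target: the corner it stands closest to.
        partner_target = min(
            CORNERS, key=lambda c: dist(CORNERS[c], observed_partner_position)
        )
        # Take the remaining corner -- no messages exchanged, only observation.
        return next(c for c in CORNERS if c != partner_target)

a = Agent("robot_a", (0.2, 1.0))
b = Agent("robot_b", (1.9, 1.1))
# Each agent observes the other with its own cameras and decides alone.
print(a.choose_corner(b.position))  # left  (a sees b near the right corner)
print(b.choose_corner(a.position))  # right (b sees a near the left corner)
```

The point of the sketch is that both agents reach a compatible split of the task without any central boss, which is the same flavor of coordination Figure describes, minus the hard parts (deformable cloth, real perception).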

Seeing is Believing (for Robots)
Figure's secret sauce is its updated Helix AI framework, which now includes "perception-conditioned whole-body control." This means the robots don't just know where their own joints are (proprioception); they're also constantly processing what their onboard stereo cameras see. They turn those images into a real-time 3D understanding of their environment, letting them "see" and "feel" the ground simultaneously.
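As a rough illustration of what "perception-conditioned" means in practice, here is a hypothetical sketch (not Figure's actual Helix internals): the control policy's input vector concatenates proprioception, i.e. what the joints report, with a coarse terrain feature computed from camera depth, i.e. what the robot sees. Function names and dimensions are made up for the example.

```python
# Illustrative sketch of a perception-conditioned policy input (hypothetical,
# not Figure's code): proprioception and vision-derived terrain features are
# fused into a single observation vector for the controller.

def terrain_features(depth_rows, grid=4):
    """Downsample a depth image (list of rows) into per-cell mean heights."""
    h, w = len(depth_rows), len(depth_rows[0])
    cell_h, cell_w = h // grid, w // grid
    features = []
    for gy in range(grid):
        for gx in range(grid):
            cell = [depth_rows[y][x]
                    for y in range(gy * cell_h, (gy + 1) * cell_h)
                    for x in range(gx * cell_w, (gx + 1) * cell_w)]
            features.append(sum(cell) / len(cell))
    return features

def policy_input(joint_positions, joint_velocities, depth_rows):
    # The policy "feels" its own body and "sees" the ground in one vector.
    return list(joint_positions) + list(joint_velocities) + terrain_features(depth_rows)

flat_ground = [[1.0] * 64 for _ in range(64)]  # stand-in for stereo depth
obs = policy_input([0.0] * 23, [0.0] * 23, flat_ground)
print(len(obs))  # 62 = 23 joint angles + 23 velocities + 16 terrain cells
```

The key design idea is that terrain geometry enters the controller as just more numbers alongside joint state, so one policy can react to both its body and the ground at once.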
This improved vision system, trained in simulations with wildly randomized terrains, lets them navigate stairs and uneven ground with surprising grace, even as lighting changes. It’s a major step toward solving the "sim-to-real" challenge: getting behaviors learned in a virtual world to hold up in the messy, unpredictable real one.
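The training trick above is usually called domain randomization. A minimal sketch, with made-up parameter names and ranges rather than Figure's actual setup: every training episode draws a different terrain, friction, and lighting, so the learned policy can't overfit to any single clean simulated world.

```python
import random

# Sketch of domain randomization for sim-to-real transfer (hypothetical
# names and ranges, not Figure's actual training configuration).

def sample_episode_conditions(rng):
    return {
        "step_height_m": rng.uniform(0.0, 0.25),   # stairs of varying rise
        "ground_tilt_deg": rng.uniform(-10, 10),   # uneven slopes
        "friction": rng.uniform(0.4, 1.2),         # slick to grippy floors
        "light_intensity": rng.uniform(0.2, 2.0),  # dim to harsh lighting
    }

rng = random.Random(0)  # seeded so runs are reproducible
for episode in range(3):
    conditions = sample_episode_conditions(rng)
    # train_policy_one_episode(conditions)  # hypothetical training step
    print(episode, {k: round(v, 2) for k, v in conditions.items()})
```

Because no two episodes look alike, the real world becomes just one more variation the policy has already seen, which is the intuition behind the sim-to-real wins described above.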
Figure isn't just showing off; they're planning to scale up. They want to boost production of their Figure 03 humanoid from one robot a day to one every hour within four months at their new BotQ facility in California. So, your future robot roommate might be arriving sooner than you think. Let's just hope it also does dishes.
