A Tesla owner just drove from the West Coast to the East Coast—nearly 2,700 miles in under three days—without once taking manual control of the vehicle. David Moss's journey, shared widely online, has reignited a familiar question: how close are we really to self-driving cars?
The trip itself is genuinely striking. Tesla's Full Self-Driving software navigated highways at speed, merged through traffic, handled city streets, stopped at signals, and even guided the car to and from Supercharger stations. For three days, the system made thousands of micro-decisions with no human override.
But here's where it gets complicated, and where the gap between "impressive" and "autonomous" becomes clear.
What Actually Happened
Tesla's system is a driver-assistance feature, SAE Level 2 in regulatory terms, not a true autonomous vehicle. The driver is required, legally and practically, to stay alert and ready to intervene at any moment. The 2,700-mile claim rests on the owner's own reporting and shared data; neither regulators nor independent testing organizations have verified it. That doesn't mean Moss is lying. It means this is a demonstration of what one car did under one person's watch, not proof that the technology is ready for unsupervised use at scale.
What's genuinely noteworthy is the progress. Earlier versions of Autopilot and FSD struggled with basic scenarios. This version handled construction zones, complex interchanges, and the kind of unpredictable real-world driving that still trips up many systems. The software has visibly improved.
But improvement and autonomy aren't the same thing. A system that works 99% of the time still needs a human ready for the other 1%, and on a 2,700-mile trip that 1% works out to roughly 27 miles of driving where something could go wrong.
Why This Actually Matters
The trip highlights how far the technology has outpaced the rules around it. Regulators haven't caught up. Liability frameworks are still being written. And public understanding of what these systems can and can't do remains fuzzy, partly because the companies selling them use names like "Full Self-Driving," which is, frankly, misleading.
Demonstrations like Moss's will keep happening. They'll keep pushing the conversation forward. But they also risk creating a false sense that the finish line is closer than it is. The real work—the regulatory clarity, the edge-case testing, the liability frameworks—is quieter and slower.
What's next isn't another coast-to-coast drive. It's the harder, less visible work of turning "this one car did this once" into "this system is safe for millions of people every day."