China has switched on a nationwide optical network that does something previously thought difficult: it lets distant data centers work together almost as well as if they were sitting next to each other.
The system spans 1,243 miles (about 2,000 kilometers) and connects the country's top computing facilities into what researchers are calling the world's largest distributed AI supercomputer. The key innovation isn't just the distance: the network achieves 98% of the efficiency you'd get from a single, centralized data center. The enabler is a deterministic network channel that guarantees dedicated bandwidth, ultra-low latency, and near-zero packet loss. In practical terms, data moves reliably and fast enough that the computers can coordinate seamlessly, even more than a thousand miles apart.
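To see why dedicated bandwidth and low latency matter so much, here's a minimal back-of-envelope sketch of synchronous distributed training, where a step's efficiency is compute time divided by compute time plus communication time. Every number in it (step duration, gradient payload, link rate, added latency) is an illustrative assumption, not a figure from the project.

```python
# Back-of-envelope model: scaling efficiency of synchronous distributed
# training, as compute time vs. communication time per step.
# Every number below is an illustrative assumption, not a project figure.

def scaling_efficiency(compute_s: float, payload_bytes: float,
                       bandwidth_bps: float, latency_s: float) -> float:
    """Efficiency = compute / (compute + communication), per training step."""
    comm_s = payload_bytes * 8 / bandwidth_bps + latency_s
    return compute_s / (compute_s + comm_s)

# Hypothetical step: 1 s of compute, 1 GB of gradients exchanged,
# over a dedicated 400 Gb/s channel with 10 ms of added latency.
eff = scaling_efficiency(compute_s=1.0, payload_bytes=1e9,
                         bandwidth_bps=400e9, latency_s=0.010)
print(f"modeled efficiency: {eff:.1%}")  # ~97.1%
```

Under these assumptions the long-haul link adds only about 30 ms to each one-second step, which is the kind of overhead profile that makes a figure like 98% plausible.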
Liu Yunjie, chief director of the project and a member of the Chinese Academy of Engineering, says the system is designed to accelerate AI model training and other compute-intensive research. It's part of China's Future Network Test Facility (FNTF), the country's first major national infrastructure project in information and communication. After more than a decade of development, it officially went live on December 3.
The real-world impact shows up in the numbers. The team demonstrated the system by transmitting a 72-terabyte dataset from the world's largest single-dish radio telescope across 621 miles in just 1.6 hours. The same transfer over the regular internet would have taken 699 days. That's the difference between "we can do this research" and "we can't."
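Those figures are easy to sanity-check. Here's a quick sketch of the implied data rates, using only the numbers quoted above and assuming decimal terabytes:

```python
# Sanity-check the quoted transfer: 72 TB in 1.6 hours over the new
# network vs. 699 days over the regular internet (decimal terabytes).

data_bits = 72e12 * 8          # 72 TB expressed in bits

fast_s = 1.6 * 3600            # 1.6 hours in seconds
slow_s = 699 * 24 * 3600       # 699 days in seconds

print(f"network rate:  {data_bits / fast_s / 1e9:.0f} Gb/s")  # ~100 Gb/s
print(f"internet rate: {data_bits / slow_s / 1e6:.1f} Mb/s")  # ~9.5 Mb/s
```

That works out to a sustained rate of roughly 100 gigabits per second on the new network versus about 10 megabits per second over the ordinary internet, the kind of gap a dedicated optical channel would explain.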
The distributed approach also fits neatly into China's "East Data West Computing" project, which deliberately shifts data processing to energy-rich western regions rather than concentrating everything in power-hungry eastern cities. By linking these geographically dispersed centers, the network lets researchers tap cheaper, cleaner computing resources without sacrificing speed.
What comes next
The infrastructure is already supporting the development of 5G-Advanced and 6G technologies. Going forward, both research institutions and enterprises will be able to test new technologies on the platform, treating it as a shared testing ground rather than proprietary infrastructure. Along the way, the project team has produced 206 international and domestic standards and secured 221 invention patents, suggesting this isn't a one-off engineering feat but a foundation for how distributed computing might work at scale.
The challenge now is whether other countries can build something comparable, and whether this model — linking distant computing centers into a single logical machine — becomes the standard way large-scale AI research happens.