Inside a tokamak reactor, plasma hotter than the sun's core is held in place by magnetic fields so delicate that even tiny instabilities can collapse the whole system. Now, one of the world's fastest supercomputers is learning to see those failures coming—milliseconds before they happen.
Aurora, an exascale machine at Argonne National Laboratory, performs a quintillion calculations per second. It's being pointed at fusion energy's central paradox: the reaction is inherently safe and clean, but keeping it stable enough to generate power is brutally hard. Researchers from Princeton Plasma Physics Laboratory are using Aurora to do two things at once—simulate the extreme physics of tokamaks in unprecedented detail, and train AI systems that can predict disruptions before reactor walls get damaged.
Why This Matters Now
Fusion has always promised the same thing: abundant, carbon-free energy from seawater. The fuel is everywhere. The reaction shuts down instantly if anything goes wrong. But for decades, the hard part hasn't been the physics—it's been the engineering. You need to hold plasma at 150 million degrees Celsius (ten times hotter than the sun's core) in a stable state long enough to get more energy out than you put in. Magnetic islands form. Plasma becomes unstable. Reactions collapse.
William Tang and Choongseok Chang are using Aurora to model plasma behavior at reactor scale, focusing on ITER—the massive international fusion project under construction in France. The simulations involve solving equations across multiple dimensions for trillions of mathematical particles. On older systems, these calculations took days. On Aurora, they take hours. That speed matters because it lets researchers run more scenarios, test more ideas, and move faster toward a working design.
But the real breakthrough is happening in a parallel project. Kyle Felker is training AI models on decades of experimental data from fusion facilities around the world—places like DIII-D and the Joint European Torus. The AI learns the subtle patterns that appear in the data just before a disruption happens. If it works, operators could see a disruption coming and intervene within milliseconds, turning fusion management from reactive (scrambling after things break) to proactive (preventing breaks before they start).
There's a feedback loop here worth noticing. AI systems are getting more powerful and more energy-hungry. That rising electricity demand is exactly why clean energy breakthroughs matter so urgently. As Tang points out, you can't keep scaling up AI without scaling up clean power generation. Fusion research and AI advancement are now pushing on each other.
Aurora itself opened to global researchers in 2025, though selected teams accessed it earlier through an early science program designed to debug the hardware and software while giving researchers a first real-world test of the machine. Those early projects are already showing what exascale computing can do when pointed at hard problems. The machine doesn't solve fusion by itself—it's a tool that reveals what you don't know, which is often the most valuable thing a powerful computer can do.
The next step is scaling up. Chang notes that solving fusion at commercial scale might require ten exascale computers working in concert. We have one. That's progress.