A pendulum swings. An electrical circuit hums. A climate system shifts. Each looks impossibly complicated up close, with hundreds of variables dancing together and no way to predict them by hand. But Duke University researchers have built an AI that does something remarkable: it watches these chaotic systems unfold and finds the simple equations underneath.
The insight isn't new. In the 1930s, mathematician Bernard Koopman theorized that even wildly nonlinear systems — ones that don't behave in straight lines — could be represented by much simpler linear models. Newton managed something similar centuries ago when he captured force and motion in a few elegant equations. But translating Koopman's theory into practice has been the catch. Building linear models for genuinely complex systems requires hundreds or thousands of equations, each tied to its own variable. That's beyond what any human can reasonably hold in their head.
The Duke team's framework skips that bottleneck. It analyzes time-series data from experiments, searches for the patterns that actually matter, and uses deep learning combined with physics-inspired constraints to compress the problem. The result: a drastically smaller set of hidden variables that still captures how the system actually behaves.
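The Koopman idea the framework rests on can be illustrated with a textbook toy system that becomes exactly linear once the state is "lifted" into a few extra observables, after which a single matrix fitted to time-series data predicts the whole nonlinear trajectory. This is a minimal NumPy sketch of that idea, not the Duke team's method; the system, the lifting function, and the constants `a` and `b` are illustrative choices:

```python
import numpy as np

# Classic toy system with a known exact Koopman linearization:
#   x1[k+1] = a * x1[k]
#   x2[k+1] = b * x2[k] + (a**2 - b) * x1[k]**2
# Lifting the state to observables z = [x1, x2, x1**2] makes the
# dynamics linear: z[k+1] = K @ z[k] for a constant matrix K.
a, b = 0.9, 0.5

def step(x):
    x1, x2 = x
    return np.array([a * x1, b * x2 + (a**2 - b) * x1**2])

# Generate a trajectory -- the "time-series data from experiments".
traj = [np.array([1.0, -0.5])]
for _ in range(50):
    traj.append(step(traj[-1]))
traj = np.array(traj)

def lift(x):
    # Map a raw state into the observable (lifted) space.
    return np.array([x[0], x[1], x[0]**2])

Z = np.array([lift(x) for x in traj])
Z0, Z1 = Z[:-1], Z[1:]

# Fit the linear operator by least squares: Z1 ~= Z0 @ K.T
sol, *_ = np.linalg.lstsq(Z0, Z1, rcond=None)
K = sol.T

# Predict 10 steps ahead using only the fitted linear model.
z = lift(traj[0])
for _ in range(10):
    z = K @ z
print(np.allclose(z[:2], traj[10], atol=1e-8))  # → True
```

In this toy case the lifting is known in closed form; the hard part in real systems, and the gap the deep-learning framework targets, is discovering a small set of such observables automatically from data.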
Finding the Landmarks
When the researchers tested their approach on pendulums, electrical circuits, climate models, and neural networks, something consistent happened. Each time, the AI uncovered a small collection of hidden variables that controlled the whole system. The reduced models were often more than 10 times smaller than what earlier machine-learning approaches needed — and they still made reliable long-term predictions.
"What stands out is not just the accuracy, but the interpretability," said Boyuan Chen, director of the General Robotics Lab at Duke. "When a linear model is compact, the scientific discovery process can be naturally connected to existing theories and methods that human scientists have developed over millennia. It's like connecting AI scientists with human scientists."
The framework does more than forecast. It can also identify attractors — stable states where a system naturally settles. For a dynamicist, finding these is like discovering landmarks in unmapped territory. Once you know where the stable points are, the rest of the system starts to make sense. You can spot when something is drifting toward instability, when a system is operating normally, when behavior is about to shift.
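Once a compact linear model is in hand, its eigenvalues expose this kind of landmark directly. A brief sketch using a hypothetical learned operator `K`, not one from the paper: for a discrete-time linear model `z[k+1] = K @ z[k]`, eigenvalues inside the unit circle mean every trajectory settles toward an attracting fixed point, while any eigenvalue outside it signals growth, i.e. drift toward instability.

```python
import numpy as np

# Hypothetical learned latent operator for z[k+1] = K @ z[k].
K = np.array([[0.95, 0.10],
              [0.00, 0.80]])

eigvals = np.linalg.eigvals(K)
spectral_radius = max(abs(eigvals))

# All eigenvalue magnitudes below 1.0: the fixed point at z = 0
# attracts every trajectory; a magnitude above 1.0 would flag
# a direction in which the system is drifting toward instability.
print("attracting fixed point:", spectral_radius < 1.0)  # → True
```

This is the payoff of keeping the model linear: stability questions that are hard for a nonlinear system reduce to a standard eigenvalue check.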
Not Replacing Physics, Extending It
This matters most when traditional equations don't exist or are too cumbersome to write. Climate systems. Biological networks. Emerging technologies. These are domains where physicists have incomplete maps. "This is not about replacing physics," said Sam Moore, the PhD candidate who led the work. "It's about extending our ability to reason using data when the physics is unknown, hidden, or too cumbersome to write down."
Next, the team plans to use this framework to guide experimental design — essentially, to help scientists decide what data to collect in order to reveal a system's structure most efficiently. They're also expanding beyond time-series data to richer inputs: video, audio, signals from complex biological systems.
The work points toward something larger: machine scientists that can assist with automated discovery. By combining modern AI with the mathematical language of how systems change over time, Chen's lab is sketching a future where AI does more than recognize patterns in data. It could help uncover the fundamental rules that shape the physical world and living systems.