A team of quantum physicists in Spain has compressed one of the most capable reasoning AI models in existence, DeepSeek R1, to 55% of its original size while stripping out the censorship constraints its Chinese creators embedded in it.
Multiverse Computing, the Spanish firm behind the work, achieved this using a technique borrowed from quantum physics. Instead of treating the model as a black box, they used "tensor networks"—mathematical structures originally developed to represent quantum systems—to map out the internal architecture of DeepSeek R1. Think of it like finding the blueprint of a building by studying how light and sound move through it.
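Multiverse has not published the internals of its tensor-network pipeline, but the core idea behind this family of compression techniques can be sketched with a much simpler cousin: replacing a dense weight matrix with a low-rank factorization. The sketch below (plain truncated SVD in NumPy, not Multiverse's actual method) shows how a layer's parameter count shrinks while most of its behavior is preserved:

```python
import numpy as np

def compress_layer(W: np.ndarray, rank: int):
    """Factor W (d_out x d_in) into two thin matrices via truncated SVD.

    Illustrative only: tensor-network methods generalize this idea to
    higher-order decompositions across many layers at once.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # shape (d_out, rank)
    B = Vt[:rank, :]             # shape (rank, d_in)
    return A, B

rng = np.random.default_rng(0)
# A synthetic "weight matrix": mostly low-rank structure plus a little noise
W = rng.normal(size=(512, 64)) @ rng.normal(size=(64, 512)) \
    + 0.01 * rng.normal(size=(512, 512))

A, B = compress_layer(W, rank=64)

original_params = W.size
compressed_params = A.size + B.size
error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {original_params} -> {compressed_params} "
      f"({compressed_params / original_params:.0%})")
print(f"relative reconstruction error: {error:.3f}")
```

Here the factored layer keeps 25% of the parameters yet reconstructs the original matrix almost exactly, because the "signal" in the matrix was low-rank to begin with. The bet behind tensor-network compression is that large language models have a similar hidden structure.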
Once they had that map, something unexpected became possible: they could see exactly where the censorship lived in the model's neural pathways. Rather than retraining the entire system from scratch, they could surgically remove specific constraints. After compression and editing, they fine-tuned the smaller model to perform nearly as well as the original.
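The company has not disclosed exactly how the editing step works, but one published community technique for this kind of surgical removal, sometimes called refusal-direction ablation, gives the flavor: estimate a single direction in the model's activation space associated with the unwanted behavior, then project it out of the weights. The toy NumPy sketch below is an assumption-laden illustration of that idea, not Multiverse's method:

```python
import numpy as np

def ablate_direction(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove the component of each output of layer W along direction d.

    Applies (I - d d^T) W, so no input can produce output along d.
    Hypothetical sketch of targeted weight editing, not a real pipeline.
    """
    d = d / np.linalg.norm(d)
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))      # toy layer weights
d = rng.normal(size=8)           # "behavior direction" to remove

W_edited = ablate_direction(W, d)

# After editing, the layer's output has no component along d for ANY input
x = rng.normal(size=8)
d_hat = d / np.linalg.norm(d)
print(abs(d_hat @ (W_edited @ x)))  # effectively zero
```

The appeal of this style of edit is precisely what the article describes: it changes one identified behavior without retraining everything else.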
Testing the Limits
To verify their work, Multiverse compiled about 25 questions on topics restricted by Chinese authorities—Tiananmen Square, President Xi Jinping, political dissent. The original DeepSeek R1 refused to answer or returned propaganda. The modified version provided factual, substantive responses comparable to Western AI models.
What makes this significant isn't just that they succeeded, but how they succeeded. The quantum-inspired approach gives researchers granular control over AI models in ways that weren't possible before. In theory, you could use the same technique to remove specific biases, add specialized knowledge to a model, or isolate and understand how particular behaviors emerge in large language models.
That said, the researchers themselves acknowledge a hard truth: censorship in Chinese AI models isn't just a few switches you can flip. It's woven throughout the training process—baked into the data, the objectives, the feedback loops. Complete removal might be impossible without essentially retraining the model from the ground up.
What's emerging here is a new kind of transparency tool. As AI models become more powerful and more opaque, the ability to map their internals and edit them with precision could matter for researchers trying to understand what's actually happening inside these systems. Whether that leads to safer, more honest AI—or just more sophisticated ways to manipulate them—remains the real question.